Artist Joseph Nechvatal has a longstanding interest in viruses, specifically computer viruses, and his work seems strangely apt as we cope with the COVID-19 pandemic. He very kindly sent me some à propos information (received via an April 5, 2020 email),
I wanted to let you know that _viral symphOny_ (2006-2008), my 1 hour 40 minute collaborative electronic noise music symphony, created using custom artificial life C++ software based on the viral phenomenon model, is available to the world for free here:
Before you click the link and dive in you might find these bits of information interesting. BTW, I do provide the link again at the end of this post.
Origin of and concept behind the term ‘computer virus’
As I’ve learned to expect, there are two and possibly more origin stories for the term ‘computer virus’. Refreshingly, there is near universal agreement in the material I’ve consulted about John von Neumann’s role as the originator of the concept. After that, it gets more complicated; Wikipedia credits a writer with christening the term (Note: Links have been removed),
The first academic work on the theory of self-replicating computer programs was done in 1949 by John von Neumann who gave lectures at the University of Illinois about the “Theory and Organization of Complicated Automata”. The work of von Neumann was later published as the “Theory of self-reproducing automata”. In his essay von Neumann described how a computer program could be designed to reproduce itself. Von Neumann’s design for a self-reproducing computer program is considered the world’s first computer virus, and he is considered to be the theoretical “father” of computer virology. In 1972, Veith Risak directly building on von Neumann’s work on self-replication, published his article “Selbstreproduzierende Automaten mit minimaler Informationsübertragung” (Self-reproducing automata with minimal information exchange). The article describes a fully functional virus written in assembler programming language for a SIEMENS 4004/35 computer system. In 1980 Jürgen Kraus wrote his diplom thesis “Selbstreproduktion bei Programmen” (Self-reproduction of programs) at the University of Dortmund. In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.
The first known description of a self-reproducing program in a short story occurs in 1970 in The Scarred Man by Gregory Benford [emphasis mine] which describes a computer program called VIRUS which, when installed on a computer with telephone modem dialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer. It then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE.
The idea was explored further in two 1972 novels, When HARLIE Was One by David Gerrold and The Terminal Man by Michael Crichton, and became a major theme of the 1975 novel The Shockwave Rider by John Brunner.
The 1973 Michael Crichton sci-fi movie Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok. Alan Oppenheimer’s character summarizes the problem by stating that “…there’s a clear pattern here which suggests an analogy to an infectious disease process, spreading from one…area to the next.” To which the replies are stated: “Perhaps there are superficial similarities to disease” and, “I must confess I find it difficult to believe in a disease of machinery.”
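The fictional VIRUS program described above is, at heart, a random-dialing exponential spread process. Purely as an illustration (a toy model with invented numbers, not anything from Benford's story or a real program), here is a minimal sketch:

```python
# Toy simulation of the fictional VIRUS program's spread: each infected
# machine dials a few random numbers per round and, on reaching an
# uninfected modem, copies itself over. Population size, dial rate, and
# seed are illustrative assumptions only.
import random

def simulate_spread(total_machines=10_000, dials_per_round=5,
                    rounds=10, seed=42):
    rng = random.Random(seed)
    infected = {0}                      # patient zero
    history = [len(infected)]
    for _ in range(rounds):
        newly = set()
        for _ in range(len(infected) * dials_per_round):
            target = rng.randrange(total_machines)
            if target not in infected:
                newly.add(target)
        infected |= newly
        history.append(len(infected))
    return history

history = simulate_spread()
print(history)  # roughly exponential growth until saturation
```

The curve grows multiplicatively at first and then flattens as most machines are already infected, which is exactly why the story needs a VACCINE counter-program rather than waiting the outbreak out.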
Scientific American has an October 19, 2001 article citing four different experts’ answers to the question “When did the term ‘computer virus’ arise?” Three of the experts cite academics as the source of the term (usually Fred Cohen). One of the experts does mention writers (for the most part, not the same writers cited in the Wikipedia entry quoted above).
One expert discusses the concept behind the term and confirms what most people will suspect. Interestingly, this expert’s origin story varies somewhat from the other three.
The concept behind the first malicious computer programs was described years ago in the Computer Recreations column of Scientific American. The metaphor of the “computer virus” was adopted because of the similarity in form, function and consequence with biological viruses that attack the human system. Computer viruses can insert themselves in another program, taking over control or adversely affecting the function of the program.
Like their biological counterparts, computer viruses can spread rapidly and self-replicate systematically. They also mimic living viruses in the way they must adapt through mutation [emphases mine] to the development of resistance within a system: the author of a computer virus must upgrade his creation in order to overcome the resistance (antiviral programs) or to take advantage of new weakness or loophole within the system.
Computer viruses also act like biologics [emphasis mine] in the way they can be set off: they can be virulent from the outset of the infection, or they can be activated by a specific event (logic bomb). But computer viruses can also be triggered at a specific time (time bomb). Most viruses act innocuously towards a system until their specific condition is met.
The computer industry has expanded the metaphor to now include terms like inoculation, disinfection, quarantine and sanitation [emphases mine]. Now if your system gets infected by a computer virus you can quarantine it until you can call the “virus doctor” who can direct you to the appropriate “virus clinic” where your system can be inoculated and disinfected and an anti-virus program can be prescribed.
More about Joseph Nechvatal and his work on viruses
The similarities between computer and biological viruses are striking and with that in mind, here’s a clip featuring part of viral symphOny,
Before giving you a second link to Nechvatal’s entire viral symphOny, here’s some context about him and his work, from the Joseph Nechvatal Wikipedia entry, (Note: Links have been removed),
He began using computers to make “paintings” in 1986 and later, in his signature work, began to employ computer viruses. These “collaborations” with viral systems positioned his work as an early contribution to what is increasingly referred to as a post-human aesthetic.
From 1991–1993 he was artist-in-residence at the Louis Pasteur Atelier in Arbois, France and at the Saline Royale/Ledoux Foundation’s computer lab. There he worked on The Computer Virus Project, which was an artistic experiment with computer viruses and computer animation. He exhibited at Documenta 8 in 1987.
In 1999 Nechvatal obtained his Ph.D. in the philosophy of art and new technology concerning immersive virtual reality at Roy Ascott’s Centre for Advanced Inquiry in the Interactive Arts (CAiiA), University of Wales College, Newport, UK (now the Planetary Collegium at the University of Plymouth). There he developed his concept of viractualism, a conceptual art idea that strives “to create an interface between the biological and the technological.” According to Nechvatal, this is a new topological space.
In 2002 he extended his experimentation into viral artificial life through a collaboration with the programmer Stephane Sikora of music2eye in a work called the Computer Virus Project II, inspired by the a-life work of John Horton Conway (particularly Conway’s Game of Life), by the general cellular automata work of John von Neumann, by the genetic programming algorithms of John Koza and the auto-destructive art of Gustav Metzger.
In 2005 he exhibited Computer Virus Project II works (digital paintings, digital prints, a digital audio installation and two live electronic virus-attack art installations) in a solo show called cOntaminatiOns at Château de Linardié in Senouillac, France. In 2006 Nechvatal received a retrospective exhibition entitled Contaminations at the Butler Institute of American Art’s Beecher Center for Arts and Technology.
Dr. Nechvatal has also contributed to digital audio work with his noise music viral symphOny [emphasis mine], a collaborative sound symphony created by using his computer virus software at the Institute for Electronic Arts at Alfred University. viral symphOny was presented as a part of nOise anusmOs in New York in 2012.
Gold stars for everyone who recognized the loose paraphrasing of the title, Love in the Time of Cholera, of Gabriel García Márquez’s 1985 novel.
I wrote my headline and first paragraph yesterday and found this in my email box this morning, from a March 25, 2020 University of British Columbia news release, which compares times, diseases, and scares of the past with today’s COVID-19 (Perhaps politicians and others could read this piece and stop using the word ‘unprecedented’ when discussing COVID-19?),
How globalization stoked fear of disease during the Romantic era
In the late 18th and early 19th centuries, the word “communication” had several meanings. People used it to talk about both media and the spread of disease, as we do today, but also to describe transport—via carriages, canals and shipping.
Miranda Burgess, an associate professor in UBC’s English department, is working on a book called Romantic Transport that covers these forms of communication in the Romantic era and invites some interesting comparisons to what the world is going through today.
We spoke with her about the project.
What is your book about?
It’s about global infrastructure at the dawn of globalization—in particular the extension of ocean navigation through man-made inland waterways like canals and ship’s canals. These canals of the late 18th and early 19th century were like today’s airline routes, in that they brought together places that were formerly understood as far apart, and shrunk time because they made it faster to get from one place to another.
This book is about that history, about the fears that ordinary people felt in response to these modernizations, and about the way early 19th-century poets and novelists expressed and responded to those fears.
What connections did those writers make between transportation and disease?
In the 1810s, they don’t have germ theory yet, so there’s all kinds of speculation about how disease happens. Works of tropical medicine, which is rising as a discipline, liken the human body to the surface of the earth. They talk about nerves as canals that convey information from the surface to the depths, and the idea that somehow disease spreads along those pathways.
When the canals were being built, some writers opposed them on the grounds that they could bring “strangers” through the heart of the city, and that standing water would become a breeding ground for disease. Now we worry about people bringing disease on airplanes. It’s very similar to that.
What was the COVID-19 of that time?
Probably epidemic cholera [emphasis mine], from about the 1820s onward. The Quarterly Review, a journal that novelist Walter Scott was involved in editing, ran long articles that sought to trace the map of cholera along rivers from South Asia, to Southeast Asia, across Europe and finally to Britain. And in the way that its spread is described, many of the same fears that people are evincing now about COVID-19 were visible then, like the fear of clothes. Is it in your clothes? Do we have to burn our clothes? People were concerned.
What other comparisons can be drawn between those times and what is going on now?
Now we worry about the internet and “fake news.” In the 19th century, they worried about what William Wordsworth called “the rapid communication of intelligence,” which was the daily newspaper. Not everybody had access to newspapers, but each newspaper was read by multiple families and newspapers were available in taverns and coffee shops. So if you were male and literate, you had access to a newspaper, and quite a lot of women did, too.
Paper was made out of rags—discarded underwear. Because of the French Revolution and Napoleonic Wars that followed, France blockaded Britain’s coast and there was a desperate shortage of rags to make paper, which had formerly come from Europe. And so Britain started to import rags from the Caribbean that had been worn by enslaved people.
Papers of the time are full of descriptions of the high cost of rags, how they’re getting their rags from prisons, from prisoners’ underwear, and fear about the kinds of sweat and germs that would have been harboured in those rags—and also discussions of scarcity, as people stole and hoarded those rags. It rings very well with what the internet is telling us now about a bunch of things around COVID-19.
Pietsch, who is also curator emeritus of fishes at the Burke Museum of Natural History and Culture, has published over 200 articles and a dozen books on the biology and behavior of marine fishes. He wrote this book with Rachel J. Arnold, a faculty member at Northwest Indian College in Bellingham and its Salish Sea Research Center.
These walking fishes have stepped into the spotlight lately, with interest growing in recent decades. And though these predatory fishes “will almost certainly devour anything else that moves in a home aquarium,” Pietsch writes, “a cadre of frogfish aficionados around the world has grown within the dive community and among aquarists.” In fact, Pietsch said, there are three frogfish public groups on Facebook, with more than 6,000 members.
First, what is a frogfish?
Ted Pietsch: A member of a family of bony fishes, containing 52 species, all of which are highly camouflaged and whose feeding strategy consists of mimicking the immobile, inert, and benign appearance of a sponge or an algae-encrusted rock, while wiggling a highly conspicuous lure to attract prey.
This is a fish that “walks” and “hops” across the sea bottom, and clambers about over rocks and coral like a four-legged terrestrial animal but, at the same time, can jet-propel itself through open water. Some lay their eggs encapsulated in a complex, floating, mucus mass, called an “egg raft,” while some employ elaborate forms of parental care, carrying their eggs around until they hatch.
They are among the most colorful of nature’s productions, existing in nearly every imaginable color and color pattern, with an ability to completely alter their color and pattern in a matter of days or seconds. All these attributes combined make them one of the most intriguing groups of aquatic vertebrates for the aquarist, diver, and underwater photographer as well as the professional zoologist.
I couldn’t resist the ‘frog’ reference and I’m glad since this is a good read with a number of fascinating photographs and illustrations.
A March 24, 2020 news item on phys.org features the future of building construction as perceived by synthetic biologists,
Buildings are not unlike a human body. They have bones and skin; they breathe. Electrified, they consume energy, regulate temperature and generate waste. Buildings are organisms—albeit inanimate ones.
But what if buildings—walls, roofs, floors, windows—were actually alive—grown, maintained and healed by living materials? Imagine architects using genetic tools that encode the architecture of a building right into the DNA of organisms, which then grow buildings that self-repair, interact with their inhabitants and adapt to the environment.
A March 23, 2020 essay by Wil Srubar (Professor of Architectural Engineering and Materials Science, University of Colorado Boulder), which originated the news item, provides more insight,
Living architecture is moving from the realm of science fiction into the laboratory as interdisciplinary teams of researchers turn living cells into microscopic factories. At the University of Colorado Boulder, I lead the Living Materials Laboratory. Together with collaborators in biochemistry, microbiology, materials science and structural engineering, we use synthetic biology toolkits to engineer bacteria to create useful minerals and polymers and form them into living building blocks that could, one day, bring buildings to life.
In our most recent work, published in Matter, we used photosynthetic cyanobacteria to help us grow a structural building material – and we kept it alive. Similar to algae, cyanobacteria are green microorganisms found throughout the environment but best known for growing on the walls in your fish tank. Instead of emitting CO2, cyanobacteria use CO2 and sunlight to grow and, in the right conditions, create a biocement, which we used to help us bind sand particles together to make a living brick.
By keeping the cyanobacteria alive, we were able to manufacture building materials exponentially. We took one living brick, split it in half and grew two full bricks from the halves. The two full bricks grew into four, and four grew into eight. Instead of creating one brick at a time, we harnessed the exponential growth of bacteria to grow many bricks at once – demonstrating a brand new method of manufacturing materials.
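The doubling the researchers describe is plain exponential growth; as a trivial back-of-envelope sketch (the cycle counts are illustrative, not figures from the paper):

```python
# Exponential brick manufacture: each cycle, every living brick is split
# in half and each half regrows into a full brick (1 -> 2 -> 4 -> 8 ...).
def bricks_after(cycles: int, start: int = 1) -> int:
    return start * 2 ** cycles

for c in range(4):
    print(c, bricks_after(c))
# matches the article's 1 -> 2 -> 4 -> 8 progression
```

Ten split-and-regrow cycles from a single brick would already yield over a thousand, which is the manufacturing advantage the authors are pointing at.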
Researchers have only scratched the surface of the potential of engineered living materials. Other organisms could impart other living functions to material building blocks. For example, different bacteria could produce materials that heal themselves, sense and respond to external stimuli like pressure and temperature, or even light up. If nature can do it, living materials can be engineered to do it, too.
It also takes less energy to produce living buildings than standard ones. Making and transporting today’s building materials uses a lot of energy and emits a lot of CO2. For example, limestone is burned to make cement for concrete. Metals and sand are mined and melted to make steel and glass. The manufacture, transport and assembly of building materials account for 11% of global CO2 emissions. Cement production alone accounts for 8%. In contrast, some living materials, like our cyanobacteria bricks, could actually sequester CO2.
The field of engineered living materials is in its infancy, and further research and development is needed to bridge the gap between laboratory research and commercial availability. Challenges include cost, testing, certification and scaling up production. Consumer acceptance is another issue. For example, the construction industry has a negative perception of living organisms. Think mold, mildew, spiders, ants and termites. We’re hoping to shift that perception. Researchers working on living materials also need to address concerns about safety and biocontamination.
The [US] National Science Foundation recently named engineered living materials one of the country’s key research priorities. Synthetic biology and engineered living materials will play a critical role in tackling the challenges humans will face in the 2020s and beyond: climate change, disaster resilience, aging and overburdened infrastructure, and space exploration.
If you have time and interest, this is fascinating. Srubar is a little exuberant and, at this point, I welcome it.
With a significant part of the global population forced to work from home, the occurrence of lower back pain may increase. Lithuanian scientists have devised a spinal stabilisation exercise programme for managing lower back pain in people who perform sedentary jobs. After testing the programme with 70 volunteers, the researchers found that the exercises are not only effective in diminishing non-specific lower back pain, but their effect lasts 3 times longer than that of a usual muscle strengthening exercise programme.
According to the World Health Organisation, lower back pain is among the top 10 diseases and injuries that are decreasing the quality of life across the global population. It is estimated that non-specific low back pain is experienced by 60% to 70% of people in industrialised societies. Moreover, it is the leading cause of activity limitation and work absence throughout much of the world. For example, in the United Kingdom, low back pain causes more than 100 million workdays lost per year, in the United States – an estimated 149 million.
Chronic lower back pain, which starts from long-term irritation or nerve injury, affects the emotions of the afflicted. Anxiety, bad mood and even depression, as well as malfunctioning of other bodily systems such as nausea, tachycardia and elevated arterial blood pressure, are among the conditions that may be caused by lower back pain.
During the coronavirus disease (COVID-19) outbreak, with a significant part of the global population working from home and not always having a properly designed office space, the occurrence of lower back pain may increase.
“Lower back pain is reaching epidemic proportions. Although it is usually clear what is causing the pain and its chronic nature, people tend to ignore these circumstances and are not willing to change their lifestyle. Lower back pain usually goes away by itself, however, the chances of recurring pain are very high”, says Dr Irina Klizienė, a researcher at Kaunas University of Technology (KTU) Faculty of Social Sciences, Humanities and Arts.
Dr Klizienė, together with colleagues from KTU and the Lithuanian Sports University, has designed a set of stabilisation exercises aimed at strengthening the muscles which support the spine in the lower back, i.e. the lumbar area. The exercise programme is based on Pilates methodology.
According to Dr Klizienė, the stability of lumbar segments is an essential element of body biomechanics. Previous research evidence shows that in order to avoid the lower back pain it is crucial to strengthen the deep muscles, which are stabilising the lumbar area of the spine. One of these muscles is multifidus muscle.
“Human central nervous system is using several strategies, such as preparing for keeping the posture, preliminary adjustment to the posture, correcting the mistakes of the posture, which need to be rectified by specific stabilising exercises. Our aim was to design a set of exercises for this purpose”, explains Dr Klizienė.
The programme, designed by Dr Klizienė and her colleagues, comprises static and dynamic exercises, which train muscle strength and endurance. The static positions are to be held for 6 to 20 seconds; each exercise is to be repeated 8 to 16 times.
The previous set is a little puzzling but perhaps you’ll find the ones below easier to follow,
I think more pictures of intervening moves would have been useful. Now, getting back to the press release,
In order to check the efficiency of the programme, 70 female volunteers were randomly enrolled either to the lumbar stabilisation exercise programme or to a usual muscle strengthening exercise programme. Both groups were exercising twice a week for 45 minutes for 20 weeks. During the experiment, ultrasound scanning of the muscles was carried out.
As soon as 4 weeks into the lumbar stabilisation programme, it was observed that the cross-section area of the multifidus muscle of the subjects in the stabilisation group had increased; after completing the programme, this increase was statistically significant (p < 0.05). This change was not observed in the strengthening group.
Moreover, although both sets of exercises were efficient in eliminating lower back pain and strengthening the muscles of the lower back area, the effect of stabilisation exercises lasted 3 times longer – 12 weeks after the completion of the stabilisation programme against 4 weeks after the completion of the muscle strengthening programme.
“There are only a handful of studies, which have directly compared the efficiency of stabilisation exercises against other exercises in eliminating lower back pain”, says Dr Klizienė, “however, there are studies proving that after a year, lower back pain returned only to 30% of people who have completed a stabilisation exercise programme, and to 84% of people who haven’t taken these exercises. After three years these proportions are 35% and 75%.”
According to her, research shows that the spine stabilisation exercises are more efficient than medical intervention or usual physical activities in curing the lower back pain and avoiding the recurrence of the symptoms in the future.
The celebrated painter Jackson Pollock created his most iconic works not with a brush, but by pouring paint onto the canvas from above, weaving sinuous filaments of color into abstract masterpieces. A team of researchers analyzing the physics of Pollock’s technique has shown that the artist had a keen understanding of a classic phenomenon in fluid dynamics — whether he was aware of it or not.
In a paper published in the journal PLOS ONE, the researchers show that Pollock’s technique seems to intentionally avoid what’s known as coiling instability — the tendency of a viscous fluid to form curls and coils when poured on a surface.
“Like most painters, Jackson Pollock went through a long process of experimentation in order to perfect his technique,” said Roberto Zenit, a professor in Brown’s School of Engineering and senior author on the paper. “What we were trying to do with this research is figure out what conclusions Pollock reached in order to execute his paintings the way he wanted. Our main finding in this paper was that Pollock’s movements and the properties of his paints were such that he avoided this coiling instability.”
Pollock’s technique typically involved pouring paint straight from a can or along a stick onto a canvas lying horizontally on the floor. It’s often referred to as the “drip technique,” but that’s a bit of a misnomer in the parlance of fluid mechanics, Zenit says. In fluid mechanics, “dripping” would be dispensing the fluid in a way that makes discrete droplets on the canvas. Pollock largely avoided droplets, in favor of unbroken filaments of paint stretching across the canvas.
In order to understand exactly how the technique worked, Zenit and colleagues from the Universidad Nacional Autonoma de Mexico analyzed extensive video of Pollock at work, taking careful measure of how fast he moved and how far from the canvas he poured his paints. Having gathered data on how Pollock worked, the researchers used an experimental setup to recreate his technique. Using the setup, the researchers could deposit paint using a syringe mounted at varying heights onto a canvas moving at varying speeds. The experiments helped to zero in on the most important aspects of what Pollock was doing.
“We can vary one thing at a time so we can decipher the key elements of the technique,” Zenit said. “For example, we could vary the height from which the paint is poured and keep the speed constant to see how that changes things.”
The researchers found that the combination of Pollock’s hand speed, the distance he maintained from the canvas and the viscosity of his paint seem to be aimed at avoiding coiling instability. Anyone who’s ever poured a viscous fluid — perhaps some honey on toast — has likely seen some coiling instability. When a small amount of a viscous fluid is poured, it tends to stack up like a coil of rope before oozing across the surface.
In the context of Pollock’s technique, the instability can result in paint filaments making pigtail-like curls when poured from the can. Some prior research had concluded that the curved lines in Pollock’s paintings were a result of this instability, but this latest research shows the opposite.
“What we found is that he moved his hand at a sufficiently high speed and a sufficiently short height such that this coiling would not occur,” Zenit said.
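Zenit’s observation can be put in rough back-of-envelope form. In a viscosity-free simplification (my assumption, not the paper’s model), the falling filament lands moving at roughly its free-fall speed, sqrt(2gH), so moving the hand faster than that keeps the filament stretched across the canvas instead of stacking into coils:

```python
# Crude, viscosity-free estimate of the hand speed needed to avoid
# coiling when pouring from height H: the filament arrives at roughly
# free-fall speed sqrt(2*g*H), so the hand should move at least that fast.
# This is a scaling argument only; real paint viscosity shifts the numbers.
import math

G = 9.81  # gravitational acceleration, m/s^2

def min_hand_speed(height_m: float) -> float:
    """Rough lower bound on hand speed (m/s) to outrun the falling
    filament in this simplified, inviscid picture."""
    return math.sqrt(2 * G * height_m)

# Illustrative numbers only: pouring from 10 cm above the canvas
print(round(min_hand_speed(0.10), 2))  # ~1.4 m/s
```

Note how the bound grows with pour height, which is consistent with the finding that Pollock combined high hand speed with short heights.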
Zenit says the findings could be useful in authenticating Pollock’s works. Too many tight curls might suggest that a drip-style painting is not a Pollock. The work could also inform other settings in which viscous fluids are stretched into filaments, such as the manufacture of fiber optics. But Zenit says his main interest in the work is that it’s simply a fascinating way to explore interesting questions in fluid mechanics.
“I consider myself to be a fluid mechanics messenger,” he said. “This is my excuse to talk science. It’s fascinating to see that painters are really fluid mechanicians, even though they may not know it.”
I could not find any videos related to this research that I know how to embed, but Palacios, Zetina, and Zenit have investigated Pollock’s ‘physics’ before,
If you want to see Pollock dripping his painting in action, there’s a 10 min. 13 secs. film made in 1950 (Note: Links have been removed from text; link to 10 min. film is below),
In the summer of 1950, Hans Namuth approached Jackson Pollock and asked the abstract expressionist painter if he could photograph him in his studio, working with his “drip” technique of painting. When Namuth arrived, he found:
“A dripping wet canvas covered the entire floor. Blinding shafts of sunlight hit the wet canvas, making its surface hard to see. There was complete silence…. Pollock looked at the painting. Then unexpectedly, he picked up can and paintbrush and started to move around the canvas. It was as if he suddenly realized the painting was not finished. His movements, slow at first, gradually became faster and more dancelike as he flung black, white and rust-colored paint onto the canvas.”
The images from this shoot “helped transform Pollock from a talented, cranky loner into the first media-driven superstar of American contemporary art, the jeans-clad, chain-smoking poster boy of abstract expressionism,” one critic later wrote in The Washington Post.
You can find the film and accompanying Open Culture text intact with links here.
This supremacy refers to an engineering milestone, and an October 23, 2019 news item on ScienceDaily announces the milestone has been reached,
Researchers in UC [University of California] Santa Barbara/Google scientist John Martinis’ group have made good on their claim to quantum supremacy. Using 53 entangled quantum bits (“qubits”), their Sycamore computer has taken on — and solved — a problem considered intractable for classical computers.
“A computation that would take 10,000 years on a classical supercomputer took 200 seconds on our quantum computer,” said Brooks Foxen, a graduate student researcher in the Martinis Group. “It is likely that the classical simulation time, currently estimated at 10,000 years, will be reduced by improved classical hardware and algorithms, but, since we are currently 1.5 trillion times faster, we feel comfortable laying claim to this achievement.”
The feat is outlined in a paper in the journal Nature.
The milestone comes after roughly two decades of quantum computing research conducted by Martinis and his group, from the development of a single superconducting qubit to systems including architectures of 72 and, with Sycamore, 54 qubits (one didn’t perform) that take advantage of both the awe-inspiring and bizarre properties of quantum mechanics.
“The algorithm was chosen to emphasize the strengths of the quantum computer by leveraging the natural dynamics of the device,” said Ben Chiaro, another graduate student researcher in the Martinis Group. That is, the researchers wanted to test the computer’s ability to hold and rapidly manipulate a vast amount of complex, unstructured data.
“We basically wanted to produce an entangled state involving all of our qubits as quickly as we can,” Foxen said, “and so we settled on a sequence of operations that produced a complicated superposition state that, when measured, returns a bitstring with a probability determined by the specific sequence of operations used to prepare that particular superposition. The exercise, which was to verify that the circuit’s output corresponds to the sequence used to prepare the state, sampled the quantum circuit a million times in just a few minutes, exploring all possibilities — before the system could lose its quantum coherence.”
‘A complex superposition state’
“We performed a fixed set of operations that entangles 53 qubits into a complex superposition state,” Chiaro explained. “This superposition state encodes the probability distribution. For the quantum computer, preparing this superposition state is accomplished by applying a sequence of tens of control pulses to each qubit in a matter of microseconds. We can prepare and then sample from this distribution by measuring the qubits a million times in 200 seconds.”
“For classical computers, it is much more difficult to compute the outcome of these operations because it requires computing the probability of being in any one of the 2^53 possible states, where the 53 comes from the number of qubits — the exponential scaling is why people are interested in quantum computing to begin with,” Foxen said. “This is done by matrix multiplication, which is expensive for classical computers as the matrices become large.”
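Foxen’s point about exponential scaling is easy to check with a little arithmetic, and the matrix-multiplication approach he describes can be sketched for a handful of qubits. Here’s a rough illustration in Python (my own toy example, not code from the paper):

```python
import numpy as np

# Back-of-the-envelope: storing the full state vector of 53 qubits.
# Each of the 2^53 amplitudes is a complex number (16 bytes at double precision).
amplitudes = 2 ** 53
bytes_needed = amplitudes * 16           # complex128
petabytes = bytes_needed / 1e15
print(f"{petabytes:.0f} PB just to hold the state")  # ~144 PB

# For a small number of qubits the simulation is easy: a gate on qubit k
# is a matrix multiplication on the 2^n-dimensional state vector.
n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                           # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

def apply_single_qubit_gate(state, gate, k, n):
    """Apply a 2x2 gate to qubit k of an n-qubit state vector."""
    op = 1
    for q in range(n):
        op = np.kron(op, gate if q == k else np.eye(2))
    return op @ state

for k in range(n):
    state = apply_single_qubit_gate(state, H, k, n)

# All 2^3 = 8 bitstrings are now equally likely: a uniform superposition.
probs = np.abs(state) ** 2
print(probs)  # each ≈ 0.125
```

The full-operator construction above doubles the matrix size with every added qubit, which is exactly why the classical cost blows up while the quantum device just adds one more physical qubit.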
According to the new paper, the researchers used a method called cross-entropy benchmarking to compare the quantum circuit’s output (a “bitstring”) to its “corresponding ideal probability computed via simulation on a classical computer” to ascertain that the quantum computer was working correctly.
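For the curious, the linear variant of cross-entropy benchmarking boils down to a simple formula: multiply the average ideal probability of the bitstrings you actually sampled by the number of possible bitstrings, then subtract one. Here’s a toy sketch in Python (my own illustration; the stand-in “ideal” distribution is an assumption, not the paper’s actual circuit):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 14                      # toy size; the real device used 53 qubits
dim = 2 ** n

# Stand-in for an ideal random-circuit output distribution: probabilities
# taken from a random complex state (an assumption for illustration).
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
ideal_probs = np.abs(amps) ** 2
ideal_probs /= ideal_probs.sum()

def linear_xeb(samples, ideal_probs, dim):
    """Linear cross-entropy benchmarking fidelity:
    F = dim * mean(P_ideal of each sampled bitstring) - 1.
    F lands near 1 for a perfect device and near 0 for a fully
    decohered (uniformly random) one."""
    return dim * ideal_probs[samples].mean() - 1

perfect = rng.choice(dim, size=100_000, p=ideal_probs)  # ideal sampler
noisy = rng.integers(0, dim, size=100_000)              # pure noise

f_perfect = linear_xeb(perfect, ideal_probs, dim)  # close to 1
f_noisy = linear_xeb(noisy, ideal_probs, dim)      # close to 0
print(round(f_perfect, 2), round(f_noisy, 2))
```

The intuition: a working quantum computer preferentially emits the bitstrings its circuit makes likely, so the sampled probabilities average well above 1/dim; a broken one samples uniformly and the fidelity collapses to zero.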
“We made a lot of design choices in the development of our processor that are really advantageous,” said Chiaro. Among these advantages, he said, are the ability to experimentally tune the parameters of the individual qubits as well as their interactions.
While the experiment was chosen as a proof-of-concept for the computer, the research has resulted in a very real and valuable tool: a certified random number generator. Useful in a variety of fields, random numbers can ensure that encrypted keys can’t be guessed, or that a sample from a larger population is truly representative, leading to optimal solutions for complex problems and more robust machine learning applications. The speed with which the quantum circuit can produce its randomized bit string is so great that there is no time to analyze and “cheat” the system.
“Quantum mechanical states do things that go beyond our day-to-day experience and so have the potential to provide capabilities and application that would otherwise be unattainable,” commented Joe Incandela, UC Santa Barbara’s vice chancellor for research. “The team has demonstrated the ability to reliably create and repeatedly sample complicated quantum states involving 53 entangled elements to carry out an exercise that would take millennia to do with a classical supercomputer. This is a major accomplishment. We are at the threshold of a new era of knowledge acquisition.”
With an achievement like “quantum supremacy,” it’s tempting to think that the UC Santa Barbara/Google researchers will plant their flag and rest easy. But for Foxen, Chiaro, Martinis and the rest of the UCSB/Google AI Quantum group, this is just the beginning.
“It’s kind of a continuous improvement mindset,” Foxen said. “There are always projects in the works.” In the near term, further improvements to these “noisy” qubits may enable the simulation of interesting phenomena in quantum mechanics, such as thermalization, or the vast amount of possibility in the realms of materials and chemistry.
In the long term, however, the scientists are always looking to improve coherence times, or, at the other end, to detect and fix errors, which would take many additional qubits per qubit being checked. These efforts have been running parallel to the design and build of the quantum computer itself, and ensure the researchers have a lot of work before hitting their next milestone.
“It’s been an honor and a pleasure to be associated with this team,” Chiaro said. “It’s a great collection of strong technical contributors with great leadership and the whole team really synergizes well.”
Here’s a link to and a citation for the paper,
Quantum supremacy using a programmable superconducting processor by Frank Arute, Kunal Arya, Ryan Babbush, Dave Bacon, Joseph C. Bardin, Rami Barends, Rupak Biswas, Sergio Boixo, Fernando G. S. L. Brandao, David A. Buell, Brian Burkett, Yu Chen, Zijun Chen, Ben Chiaro, Roberto Collins, William Courtney, Andrew Dunsworth, Edward Farhi, Brooks Foxen, Austin Fowler, Craig Gidney, Marissa Giustina, Rob Graff, Keith Guerin, Steve Habegger, Matthew P. Harrigan, Michael J. Hartmann, Alan Ho, Markus Hoffmann, Trent Huang, Travis S. Humble, Sergei V. Isakov, Evan Jeffrey, Zhang Jiang, Dvir Kafri, Kostyantyn Kechedzhi, Julian Kelly, Paul V. Klimov, Sergey Knysh, Alexander Korotkov, Fedor Kostritsa, David Landhuis, Mike Lindmark, Erik Lucero, Dmitry Lyakh, Salvatore Mandrà, Jarrod R. McClean, Matthew McEwen, Anthony Megrant, Xiao Mi, Kristel Michielsen, Masoud Mohseni, Josh Mutus, Ofer Naaman, Matthew Neeley, Charles Neill, Murphy Yuezhen Niu, Eric Ostby, Andre Petukhov, John C. Platt, Chris Quintana, Eleanor G. Rieffel, Pedram Roushan, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vadim Smelyanskiy, Kevin J. Sung, Matthew D. Trevithick, Amit Vainsencher, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Adam Zalcman, Hartmut Neven & John M. Martinis. Nature volume 574, pages 505–510 (2019) DOI: https://doi.org/10.1038/s41586-019-1666-5 Issue Date: 24 October 2019
Here’s more from the March 3, 2020 ArtSci Salon announcements (received via email),
Sensorium Centre for Digital Arts and Technologies, ArtSci Salon, Cultivamos Cultura and Arte Institute present:
FACTT 2020: FESTIVAL ART AND SCIENCE Exhibition Monday, March 9th – Thursday, March 12th, 2020 11:00am-4:00pm Gales Gallery (Accolade West Room 105) York University
Exhibition Opening: March 9th from 6:00-7:30pm
Subway Stop, York University. Exit on the left – Accolade West is the building on the left
Don’t miss the 2020 Festival of Art and Science Exhibition – (Be)-Coming An Exhibition of Experimental Contemporary Art, co-sponsored by Sensorium: Centre for Digital Arts and Technology, ArtSci Salon, Arte Institute and Cultivamos Cultura. The exhibition features the work of invited artists from Portugal and North America, and AMPD students [I believe they are referring to students at York University’s School of the Arts, Media, Performance & Design]. The exhibition is curated by Marta DeMenezes [sic], Roberta Buiani and Joel Ong.
All are welcome to attend the exhibition opening which will take place on March 9th from 6:00-7:30pm in the Gales Gallery at York University.
FACTT 2020 – (BE) COMING An Exhibition of Experimental Contemporary Art is about the impermanence of becoming permanent. A transformation is an extreme, radical change. Change is unavoidable, a constant process throughout our lives. We may not always be aware of it, and often spend so much energy avoiding this “law of nature” while striving for stability that we forget it exists. (BE) COMING is an exhibition about change, the impossibility of not changing, perpetual impermanence and the process of becoming. As we become aware of the need for change in our world, on our planet and in our lives, it feels necessary to remember that life is a dynamic process. Life is a constant process of transformation and adaptation. Art, more than any other human endeavour, is a reflection of this aspect of life and therefore the best way to remember the process of being something different, something else, something more, or something less, while becoming ourselves.
****ETA March 11, 2020: CANCELLED. The Marta De Menezes talk has been cancelled****
According to the March 3, 2020 announcement, there’s another event associated with FACTT 2020; artist Marta De Menezes is being featured in a talk,
Sensorium Winter Lunchtime Seminar Series featuring: Marta De Menezes [sic]
Wednesday, March 11th, 2020 11:30am-12:30pm The Sensorium Research Loft [York University] 4th Floor GCFA, Room M333 RSVP to firstname.lastname@example.org
Our second Sensorium Winter Lunchtime Seminar Series event of March will feature pioneering bio-artist Marta De Menezes [sic] who explores the use of biology and biotechnology as new art media and in conducting her practice in research laboratories that are her art studio.
The 26th annual International Symposium on Electronic Arts (ISEA): Why Sentience? is being held from May 19 – 24, 2020 in Montreal, Canada and organizers have sent, via email, a March 3, 2020 announcement,
DISCOVER THE PRELIMINARY PROGRAMMING!
Below is the list of accepted authors* from the call for submissions to ISEA2020. *Speakers are confirmed upon registration
Professor in the Department of Communication at Université de Montréal
Agronomist (ENSA Montpellier, 1986) and sociologist (Ph.D., Paris X Nanterre, 1991), Thierry Bardini is a full professor in the Department of Communication at the Université de Montréal, where he has been teaching since 1993. From 1990 to 1993, he was a visiting scholar and adjunct professor at the Annenberg School for Communication at the University of Southern California, under the supervision of Everett M. Rogers. His research interests concern contemporary cyberculture, from the production and uses of information and communication technologies to molecular biology. He is the author of Bootstrapping: Douglas Engelbart, Coevolution and the Genesis of Personal Computing (Stanford University Press, 2000), Junkware (University of Minnesota Press, 2011) and Journey to the End of the Species (in collaboration with Dominique Lestel, Éditions Dis Voir, Paris, 2011). Thierry Bardini is currently working on his first research-creation project, Toward the Fourth Nature, with Beatriz Herrera and François-Joseph Lapointe.
Jolene Rickard, Ph.D. is a visual historian, artist and curator interested in the intersection of Indigenous knowledge and contemporary art, materiality, and ecocriticism with an emphasis on Hodinöhsö:ni aesthetics. A selection of publications includes: Diversifying Sovereignty and the Reception of Indigenous Art, Art Journal 76, no. 2 (2017), Aesthetics, Violence and Indigeneity, Public 27, no. 54 (Winter 2016), The Emergence of Global Indigenous Art, Sakahán, National Gallery of Canada (2013), and Visualizing Sovereignty in the Time of Biometric Sensors, The South Atlantic Quarterly: (2011). Recent exhibitions include the Minneapolis Institute of Arts, Hearts of Our People: Native Women Artists, 2019-2021, Crystal Bridges Museum of Art, Art For a New Understanding: Native Voices, 1950’s to Now, 2018-2020. Jolene is a 2020 Fulbright Research Scholar at McMaster University, ON, an Associate Professor in the departments of History of Art and Art, and the former Director of the American Indian and Indigenous Studies Program 2008-2020 (AIISP) at Cornell University, Ithaca, NY. Jolene is from the Tuscarora Nation (Turtle Clan), Hodinöhsö:ni Confederacy.
Lecturer in the Department of Visual Cultures at Goldsmiths, University of London.
Dr. Ramon Amaro is a Lecturer in the Department of Visual Cultures at Goldsmiths, University of London. Previously he was Research Fellow in Digital Culture at Het Nieuwe Instituut in Rotterdam and visiting tutor in Media Theory at the Royal Academy of Art, The Hague, NL (KABK). Ramon completed his PhD in Philosophy at Goldsmiths, while holding a Masters degree in Sociological Research from the University of Essex and a BSE in Mechanical Engineering from the University of Michigan, Ann Arbor. He has worked as Assistant Editor for the SAGE open access journal Big Data & Society; quality design engineer for General Motors; and programmes manager for the American Society of Mechanical Engineers (ASME). His research interests include machine learning, the philosophies of mathematics and engineering, critical Black thought, and philosophies of being.
Weaving a quantum processor from light is a jaw-dropping event (as far as I’m concerned). An October 17, 2019 news item on phys.org makes the announcement,
An international team of scientists from Australia, Japan and the United States has produced a prototype of a large-scale quantum processor made of laser light.
Based on a design ten years in the making, the processor has built-in scalability that allows the number of quantum components—made out of light—to scale to extreme numbers. The research was published in Science today [October 18, 2019; Note: I cannot explain the discrepancy between the dates].
Quantum computers promise fast solutions to hard problems, but to do this they require a large number of quantum components and must be relatively error free. Current quantum processors are still small and prone to errors. This new design provides an alternative solution, using light, to reach the scale required to eventually outperform classical computers on important problems.
“While today’s quantum processors are impressive, it isn’t clear if the current designs can be scaled up to extremely large sizes,” notes Dr Nicolas Menicucci, Chief Investigator at the Centre for Quantum Computation and Communication Technology (CQC2T) at RMIT University in Melbourne, Australia.
“Our approach starts with extreme scalability – built in from the very beginning – because the processor, called a cluster state, is made out of light.”
Using light as a quantum processor
A cluster state is a large collection of entangled quantum components that performs quantum computations when measured in a particular way.
“To be useful for real-world problems, a cluster state must be both large enough and have the right entanglement structure. In the two decades since they were proposed, all previous demonstrations of cluster states have failed on one or both of these counts,” says Dr Menicucci. “Ours is the first ever to succeed at both.”
To make the cluster state, specially designed crystals convert ordinary laser light into a type of quantum light called squeezed light, which is then weaved into a cluster state by a network of mirrors, beamsplitters and optical fibres.
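The “weaving” has a tidy mathematical recipe. For readers who want something concrete, here’s a toy qubit version of a linear cluster state in Python (my own sketch; the experiment itself uses continuous-variable squeezed-light modes rather than qubits, but the recipe has the same shape):

```python
import numpy as np

# Toy linear cluster state: put every node in |+>, then entangle
# neighbouring nodes with controlled-Z gates.
n = 4
dim = 2 ** n

# Start in |+>^n: a uniform superposition over all bitstrings.
state = np.full(dim, 1 / np.sqrt(dim), dtype=complex)

def apply_cz(state, a, b, n):
    """Controlled-Z between qubits a and b: flip the sign of every
    basis state in which both qubits are 1 (qubit 0 = most significant bit)."""
    out = state.copy()
    for x in range(len(state)):
        if (x >> (n - 1 - a)) & 1 and (x >> (n - 1 - b)) & 1:
            out[x] = -out[x]
    return out

# Chain graph 0-1-2-3, like beads woven on a string.
for a, b in [(0, 1), (1, 2), (2, 3)]:
    state = apply_cz(state, a, b, n)

# The result is genuinely entangled: the reduced state of qubit 0 is
# maximally mixed (purity 1/2), which never happens for a product state.
rho = np.outer(state, state.conj()).reshape(2, dim // 2, 2, dim // 2)
rho0 = np.trace(rho, axis1=1, axis2=3)   # partial trace over qubits 1..3
purity = np.trace(rho0 @ rho0).real
print(purity)  # 0.5
```

Once such a state exists, computation proceeds purely by measuring the nodes in a chosen order and basis, which is what makes the “build a big entangled resource first” approach so attractive for scaling.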
The team’s design allows for a relatively small experiment to generate an immense two-dimensional cluster state with scalability built in. Although the levels of squeezing – a measure of quality – are currently too low for solving practical problems, the design is compatible with approaches to achieve state-of-the-art squeezing levels.
The team says their achievement opens up new possibilities for quantum computing with light.
“In this work, for the first time in any system, we have made a large-scale cluster state whose structure enables universal quantum computation,” says Dr Hidehiro Yonezawa, Chief Investigator, CQC2T at UNSW Canberra. “Our experiment demonstrates that this design is feasible – and scalable.”
The experiment was an international effort, with the design developed through collaboration by Dr Menicucci at RMIT, Dr Rafael Alexander from the University of New Mexico and UNSW Canberra researchers Dr Hidehiro Yonezawa and Dr Shota Yokoyama. A team of experimentalists at the University of Tokyo, led by Professor Akira Furusawa, performed the ground-breaking experiment.
Here’s a link to and a citation for the paper,
Generation of time-domain-multiplexed two-dimensional cluster state by Warit Asavanant, Yu Shiozawa, Shota Yokoyama, Baramee Charoensombutamon, Hiroki Emura, Rafael N. Alexander, Shuntaro Takeda, Jun-ichi Yoshikawa, Nicolas C. Menicucci, Hidehiro Yonezawa, Akira Furusawa. Science 18 Oct 2019: Vol. 366, Issue 6463, pp. 373-376 DOI: 10.1126/science.aay2645
No brain but it learns, it has about 720 sexes, and it travels at a rate of approximately 4 cm (1.6 inches) per hour, it is known as ‘le blob’. Fascinated when I first stumbled across the news, I had to post this piece but wish I hadn’t waited so long.
Here’s the 101: the 900-odd species of slime mould, of which P. polycephalum is just one, are a taxonomic headache. They’re currently boxed into the Protista kingdom, because where else are you going to put something that isn’t a fungus, plant, bacteria, or animal?
When life is good, they tend to live solitary lives as single cells, like amoebae.
On occasion they squish together, forming a wide, branching structure called a plasmodium that can cover several square metres as they search cities to conquer. Well, bacteria to digest at least.
If you thought your experience on Tinder was hard, dating for slime moulds is a nightmare. Cells can only mix-and-match their genetic material if each has a compatible set of genes called matA, matB, and matC, each with up to 16 variations.
But the truly fascinating part is their ability to sense and rapidly adapt to their environment – a behaviour we might, for lack of a better word, call learning.
It isn’t an animal, a plant, or a fungus. The slime mold (Physarum polycephalum) is a strange, creeping, bloblike organism made up of one giant cell. Though it has no brain, it can learn from experience, as biologists at the Research Centre on Animal Cognition (CNRS, Université Toulouse III — Paul Sabatier) previously demonstrated. Now the same team of scientists has gone a step further, proving that a slime mold can transmit what it has learned to a fellow slime mold when the two combine. These new findings are published in the December 21, 2016, issue of the Proceedings of the Royal Society B.
Imagine you could temporarily fuse with someone, acquire that person’s knowledge, and then split off to become your separate self again. With slime molds, that really happens! The slime mold — Physarum polycephalum for scientists — is a unicellular organism whose natural habitat is forest litter. But it can also be cultured in a laboratory petri dish. Audrey Dussutour and David Vogel had already trained slime molds to move past repellent but harmless substances (e.g. coffee, quinine, or salt) to reach their food. They now reveal that a slime mold that has learned to ignore salt can transmit this acquired behavior to another simply by fusing with it.
To achieve this, the researchers taught more than 2,000 slime molds that salt posed no threat. In order to reach their food, these slime molds had to cross a bridge covered with salt. This experience made them habituated slime molds. Meanwhile, another 2,000 slime molds had to cross a bridge bare of any substance. They made up the group of naive slime molds. After this training period, the scientists grouped slime molds into habituated, naive, and mixed pairs. Paired slime molds fused together where they came into contact. The new, fused slime molds then had to cross salt-covered bridges. To the researchers’ surprise, the mixed slime molds moved just as fast as habituated pairs, and much faster than naive ones, suggesting that knowledge of the harmless nature of salt had been shared. This held true for slime molds formed from 3 or 4 individuals. No matter how many fused, only 1 habituated slime mold was needed to transfer the information.
To check that transfer had indeed taken place, the scientists separated the slime molds 1 hour and 3 hours after fusion and repeated the bridge experiment. Only naive slime molds that had been fused with habituated slime molds for 3 hours ignored the salt; all others were repulsed by it. This was proof of learning. When viewing the slime molds through a microscope, the scientists noticed that, after 3 hours, a vein formed at the point of fusion. This vein is undoubtedly the channel through which information is shared. The next challenges facing the researchers are to elucidate the form this information takes, and to test whether more than one behavior can be transmitted simultaneously. If Slime Mold A learns how to ignore quinine and Slime Mold B to ignore salt, the biologists wonder whether both behaviors can be transmitted and retained through fusion.
Here’s a link to and a citation for the paper published in 2016,
The blob is a complex unicellular organism with no nervous system. It is capable of storing knowledge and transmitting it to its fellow blobs, but how it does so had remained a mystery. Researchers at the Centre de recherches sur la cognition animale (CNRS/UT3 Paul Sabatier)* have now shown that the blob learns to tolerate a substance by absorbing it.
This discovery stems from an observation: blobs exchange information only when their networks of veins fuse. Does knowledge, then, circulate through these veins? And could the substance the blob becomes habituated to be the physical substrate of its “memory”?
The team of scientists first trained blobs to cross salty environments for six days in order to habituate them to salt. They then measured the salt concentration within these blobs: it was ten times higher than in “naive” blobs. When the researchers placed the trained blobs in a neutral environment, they observed that the blobs excreted the salt they contained within two days, effectively losing “the memory.” This experiment thus pointed to a link between the concentration of salt within the organism and the “memory” of the learning.
To go further and confirm this hypothesis, the scientists introduced the “memory” of salt habituation into naive blobs by injecting salt directly into their organisms. Two hours later, the blobs no longer behaved like naive ones but like blobs that had undergone six days of training.
When environmental conditions deteriorate, blobs are able to enter a dormant state. The researchers demonstrated that one month after entering this state, blobs retained their habituation to salt. The blobs in fact store the absorbed salt during the dormant phase and so retain the knowledge over the long term.
The results of this study show that the aversive substance itself could be the substrate of the blob’s “memory.” The researchers are now trying to understand whether the blob can memorize several aversive substances at once, and to what extent it is able to become habituated to them.
* The Centre de recherches sur la cognition animale is part of the Centre de biologie intégrative (CNRS/UT3 Paul Sabatier)
Here’s the abstract for the paper (the link and citation follow afterward),
Learning and memory are indisputably key features of animal success. Using information about past experiences is critical for optimal decision-making in a fluctuating environment. Those abilities are usually believed to be limited to organisms with a nervous system, precluding their existence in non-neural organisms. However, recent studies showed that the slime mould Physarum polycephalum, despite being unicellular, displays habituation, a simple form of learning. In this paper, we studied the possible substrate of both short- and long-term habituation in slime moulds. We habituated slime moulds to sodium, a known repellent, using a 6 day training and turned them into a dormant state named sclerotia. Those slime moulds were then revived and tested for habituation. We showed that information acquired during the training was preserved through the dormant stage as slime moulds still showed habituation after a one-month dormancy period. Chemical analyses indicated a continuous uptake of sodium during the process of habituation and showed that sodium was retained throughout the dormant stage. Lastly, we showed that memory inception via constrained absorption of sodium for 2 h elicited habituation. Our results suggest that slime moulds absorbed the repellent and used it as a ‘circulating memory’.
This article is part of the theme issue ‘Liquid brains, solid brains: How distributed cognitive architectures process information’.
Here’s the link and the citation for the 2019 paper,
Should you ever wish to find ‘le blob’, the Paris Zoological Park, known as the parc zoologique de Paris, is one of four establishments which comprise the totality of the Muséum national d’histoire naturelle in Paris. There are others outside Paris. (You can find more in the Muséum’s Wikipedia entry but it is in French.)
Clustered regularly interspaced short palindromic repeats (CRISPR) has never made much sense to me as a phrase. I understand each word individually; it’s just that I’ve never thought they made much sense strung together that way. It’s taken years, but I’ve finally found out what the words (when strung together that way) mean and the origins of the phrase. Hint: it’s all about the phages.
Apparently, it all started with yogurt as Cynthia Graber and Nicola Twilley of Gastropod discuss on their podcast, “4 CRISPR experts on how gene editing is changing the future of food.” During the course of the podcast they explain the ‘phraseology’ issue, mention hornless cattle (I have an update to the information in the podcast later in this posting), and so much more.
CRISPR started with yogurt
You’ll find the podcast (almost 50 minutes long) here on an Oct. 11, 2019 posting on the Genetic Literacy Project. If you need a little more encouragement, here’s how the podcast is described,
To understand how CRISPR will transform our food, we begin our episode at Dupont’s yoghurt culture facility in Madison, Wisconsin. Senior scientist Dennis Romero tells us the story of CRISPR’s accidental discovery—and its undercover but ubiquitous presence in the dairy aisles today.
Jennifer Kuzma and Yiping Qi help us understand the technology’s potential, both good and bad, as well as how it might be regulated and labeled. And Joyce Van Eck, a plant geneticist at the Boyce Thompson Institute in Ithaca, New York, tells us the story of how she is using CRISPR, combined with her understanding of tomato genetics, to fast-track the domestication of one of the Americas’ most delicious orphan crops [groundcherries].
I featured Van Eck’s work with groundcherries last year in a November 28, 2018 posting and I don’t think she’s published any new work about the fruit since. As for Kuzma’s point that there should be more transparency where genetically modified food is concerned, Canadian consumers were surprised (shocked) in 2017 to find out that genetically modified Atlantic salmon had been introduced into the food market without any notification (my September 13, 2017 posting; scroll down to the Fish subheading; Note: The WordPress ‘updated version from Hell’ has affected some of the formatting on the page).
The earliest article on CRISPR and yogurt that I’ve found is a January 1, 2015 article by Kerry Grens for The Scientist,
Two years ago, a genome-editing tool referred to as CRISPR (clustered regularly interspaced short palindromic repeats) burst onto the scene and swept through laboratories faster than you can say “adaptive immunity.” Bacteria and archaea evolved CRISPR eons before clever researchers harnessed the system to make very precise changes to pretty much any sequence in just about any genome.
But life scientists weren’t the first to get hip to CRISPR’s potential. For nearly a decade, cheese and yogurt makers have been relying on CRISPR to produce starter cultures that are better able to fend off bacteriophage attacks. “It’s a very efficient way to get rid of viruses for bacteria,” says Martin Kullen, the global R&D technology leader of Health and Protection at DuPont Nutrition & Health. “CRISPR’s been an important part of our solution to avoid food waste.”
Phage infection of starter cultures is a widespread and significant problem in the dairy-product business, one that’s been around as long as people have been making cheese. Patrick Derkx, senior director of innovation at Denmark-based Chr. Hansen, one of the world’s largest culture suppliers, estimates that the quality of about two percent of cheese production worldwide suffers from phage attacks. Infection can also slow the acidification of milk starter cultures, thereby reducing creameries’ capacity by up to about 10 percent, Derkx estimates.
In the early 2000s, Philippe Horvath and Rodolphe Barrangou of Danisco (later acquired by DuPont) and their colleagues were first introduced to CRISPR while sequencing Streptococcus thermophilus, a workhorse of yogurt and cheese production. Initially, says Barrangou, they had no idea of the purpose of the CRISPR sequences. But as his group sequenced different strains of the bacteria, they began to realize that CRISPR might be related to phage infection and subsequent immune defense. “That was an eye-opening moment when we first thought of the link between CRISPR sequencing content and phage resistance,” says Barrangou, who joined the faculty of North Carolina State University in 2013.
One last bit before getting to the hornless cattle, scientist Yi Li has a November 15, 2018 posting on the GLP website about his work with gene editing and food crops,
I’m a plant geneticist and one of my top priorities is developing tools to engineer woody plants such as citrus trees that can resist the greening disease, Huanglongbing (HLB), which has devastated these trees around the world. First detected in Florida in 2005, the disease has decimated the state’s US$9 billion citrus crop, leading to a 75 percent decline in its orange production in 2017. Because citrus trees take five to 10 years before they produce fruits, our new technique – which has been nominated by many editors-in-chief as one of the groundbreaking approaches of 2017 that has the potential to change the world – may accelerate the development of non-GMO citrus trees that are HLB-resistant.
Genetically modified vs. gene edited
You may wonder why the plants we create with our new DNA editing technique are not considered GMO? It’s a good question.
Genetically modified refers to plants and animals that have been altered in a way that wouldn’t have arisen naturally through evolution. A very obvious example of this involves transferring a gene from one species to another to endow the organism with a new trait – like pest resistance or drought tolerance.
But in our work, we are not cutting and pasting genes from animals or bacteria into plants. We are using genome editing technologies to introduce new plant traits by directly rewriting the plants’ genetic code.
This is faster and more precise than conventional breeding, is less controversial than GMO techniques, and can shave years or even decades off the time it takes to develop new crop varieties for farmers.
There is also another incentive to opt for using gene editing to create designer crops. On March 28, 2018, U.S. Secretary of Agriculture Sonny Perdue announced that the USDA wouldn’t regulate new plant varieties developed with new technologies like genome editing that would yield plants indistinguishable from those developed through traditional breeding methods. By contrast, a plant that includes a gene or genes from another organism, such as bacteria, is considered a GMO. This is another reason why many researchers and companies prefer using CRISPR in agriculture whenever it is possible.
As the Gastropod ’casters note, there’s more than one side to the gene-editing story, and not everyone is comfortable with the notion of cavalierly changing genetic codes when so much is still unknown.
For the past two years, researchers at the University of California, Davis, have been studying six offspring of a dairy bull, genome-edited to prevent it from growing horns. This technology has been proposed as an alternative to dehorning, a common management practice performed to protect other cattle and human handlers from injuries.
UC Davis scientists have just published their findings in the journal Nature Biotechnology. They report that none of the bull’s offspring developed horns, as expected, and blood work and physical exams of the calves found they were all healthy. The researchers also sequenced the genomes of the calves and their parents and analyzed these genomic sequences, looking for any unexpected changes.
All data were shared with the U.S. Food and Drug Administration. Analysis by FDA scientists revealed a fragment of bacterial DNA, used to deliver the hornless trait to the bull, had integrated alongside one of the two hornless genetic variants, or alleles, that were generated by genome-editing in the bull. UC Davis researchers further validated this finding.
“Our study found that two calves inherited the naturally-occurring hornless allele and four calves additionally inherited a fragment of bacterial DNA, known as a plasmid,” said corresponding author Alison Van Eenennaam, with the UC Davis Department of Animal Science.
Plasmid integration can be addressed by screening and selection, in this case, selecting the two offspring of the genome-edited hornless bull that inherited only the naturally occurring allele.
“This type of screening is routinely done in plant breeding where genome editing frequently involves a step that includes a plasmid integration,” said Van Eenennaam.
Van Eenennaam said the plasmid does not harm the animals, but the integration technically made the genome-edited bull a GMO, because it contained foreign DNA from another species, in this case a bacterial plasmid.
“We’ve demonstrated that healthy hornless calves with only the intended edit can be produced, and we provided data to help inform the process for evaluating genome-edited animals,” said Van Eenennaam. “Our data indicates the need to screen for plasmid integration when they’re used in the editing process.”
Since the original work in 2013, initiated by the Minnesota-based company Recombinetics, new methods have been developed that no longer use donor template plasmid or other extraneous DNA sequence to bring about introgression of the hornless allele.
Scientists did not observe any other unintended genomic alterations in the calves, and all animals remained healthy during the study period. Neither the bull, nor the calves, entered the food supply as per FDA guidance for genome-edited livestock.
WHY THE NEED FOR HORNLESS COWS?
Many dairy breeds naturally grow horns. But on dairy farms, the horns are typically removed, or the calves “disbudded” at a young age. Animals that don’t have horns are less likely to harm animals or dairy workers and have fewer aggressive behaviors. The dehorning process is unpleasant and has implications for animal welfare. Van Eenennaam said genome-editing offers a pain-free genetic alternative to removing horns by introducing a naturally occurring genetic variant, or allele, that is present in some breeds of beef cattle such as Angus.
I’ve never seen an educational institution use a somewhat vulgar slang term such as ‘puke’ before. Especially not in a news release. You’ll find that elsewhere online ‘puke’ has been replaced, in the headline, with the more socially acceptable ‘vomit’.
CRISPRed fruit flies mimic monarch butterfly — and could make you puke
Scientists recreate in flies the mutations that let monarch butterfly eat toxic milkweed with impunity
University of California – Berkeley
The fruit flies in Noah Whiteman’s lab may be hazardous to your health.
Whiteman and his University of California, Berkeley, colleagues have turned perfectly palatable fruit flies — palatable, at least, to frogs and birds — into potentially poisonous prey that may cause anything that eats them to puke. In large enough quantities, the flies likely would make a human puke, too, much like the emetic effect of ipecac syrup.
That’s because the team genetically engineered the flies, using CRISPR-Cas9 gene editing, to be able to eat milkweed without dying and to sequester its toxins, just as America’s most beloved butterfly, the monarch, does to deter predators.
This is the first time anyone has recreated in a multicellular organism a set of evolutionary mutations leading to a totally new adaptation to the environment — in this case, a new diet and new way of deterring predators.
Like monarch caterpillars, the CRISPRed fruit fly maggots thrive on milkweed, which contains toxins that kill most other animals, humans included. The maggots store the toxins in their bodies and retain them through metamorphosis, after they turn into adult flies, which means the adult “monarch flies” could also make animals upchuck.
The team achieved this feat by making three CRISPR edits in a single gene: modifications identical to the genetic mutations that allow monarch butterflies to dine on milkweed and sequester its poison. These mutations in the monarch have allowed it to eat common poisonous plants other insects could not and are key to the butterfly’s thriving presence throughout North and Central America.
Flies with the triple genetic mutation proved to be 1,000 times less sensitive to milkweed toxin than the wild fruit fly, Drosophila melanogaster.
Whiteman and his colleagues will describe their experiment in the Oct. 2 issue of the journal Nature.
The UC Berkeley researchers created these monarch flies to establish, beyond a shadow of a doubt, which genetic changes in the genome of monarch butterflies were necessary to allow them to eat milkweed with impunity. They found, surprisingly, that only three single-nucleotide substitutions in one gene are sufficient to give fruit flies the same toxin resistance as monarchs.
“All we did was change three sites, and we made these superflies,” said Whiteman, an associate professor of integrative biology. “But to me, the most amazing thing is that we were able to test evolutionary hypotheses in a way that has never been possible outside of cell lines. It would have been difficult to discover this without having the ability to create mutations with CRISPR.”
Whiteman’s team also showed that 20 other insect groups able to eat milkweed and related toxic plants – including moths, beetles, wasps, flies, aphids, a weevil and a true bug, most of which sport the color orange to warn away predators – independently evolved mutations in one, two or three of the same amino acid positions to overcome, to varying degrees, the toxic effects of these plant poisons.
In fact, his team reconstructed the one, two or three mutations that led to each of the four butterfly and moth lineages, each mutation conferring some resistance to the toxin. All three mutations were necessary to make the monarch butterfly the king of milkweed. Resistance to milkweed toxin comes at a cost, however. Monarch flies are not as quick to recover from upsets, such as being shaken — a test known as “bang” sensitivity.
“This shows there is a cost to mutations, in terms of recovery of the nervous system and probably other things we don’t know about,” Whiteman said. “But the benefit of being able to escape a predator is so high … if it’s death or toxins, toxins will win, even if there is a cost.”
Plant vs. insect
Whiteman is interested in the evolutionary battle between plants and parasites and was intrigued by the evolutionary adaptations that allowed the monarch to beat the milkweed’s toxic defense. He also wanted to know whether other insects that are resistant — though all less resistant than the monarch — use similar tricks to disable the toxin.
“Since plants and animals first invaded land 400 million years ago, this coevolutionary arms race is thought to have given rise to a lot of the plant and animal diversity that we see, because most animals are insects, and most insects are herbivorous: they eat plants,” he said.
Milkweeds and a variety of other plants, including foxglove, the source of digitoxin and digoxin, contain related toxins — called cardiac glycosides — that can kill an elephant and any creature with a beating heart. Foxglove’s effect on the heart is the reason that an extract of the plant, in the genus Digitalis, has been used for centuries to treat heart conditions, and why digoxin and digitoxin are used today to treat congestive heart failure.
These plants’ bitterness alone is enough to deter most animals, but a small minority of insects, including the monarch (Danaus plexippus) and its relative, the queen butterfly (Danaus gilippus), have learned to love milkweed and use it to repel predators.
Whiteman noted that the monarch is a tropical lineage that invaded North America after the last ice age, in part enabled by the three mutations that allowed it to eat a poisonous plant other animals could not, giving it a survival edge and a natural defense against predators.
“The monarch resists the toxin the best of all the insects, and it has the biggest population size of any of them; it’s all over the world,” he said.
The new paper reveals that the mutations had to occur in the right sequence, or else the flies would never have survived the three separate mutational events.
Thwarting the sodium pump
The poisons in these plants, most of them a type of cardenolide, interfere with the sodium/potassium pump (Na+/K+-ATPase) that most of the body’s cells use to move sodium ions out and potassium ions in. The pump creates an ion imbalance that the cell uses to its favor. Nerve cells, for example, transmit signals along their elongated cell bodies, or axons, by opening sodium and potassium gates in a wave that moves down the axon, allowing ions to flow in and out to equilibrate the imbalance. After the wave passes, the sodium pump re-establishes the ionic imbalance.
Digitoxin, from foxglove, and ouabain, the main toxin in milkweed, block the pump and prevent the cell from establishing the sodium/potassium gradient. This throws the ion concentration in the cell out of whack, causing all sorts of problems. In animals with hearts, like birds and humans, heart cells begin to beat so strongly that the heart fails; the result is death by cardiac arrest.
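The ion gradient the pump maintains can be made concrete with the Nernst equation, which gives the equilibrium potential a concentration gradient sustains across the membrane. Here is a minimal sketch; the concentrations are typical textbook mammalian values, not measurements from this study:

```python
import math

# Nernst equation: E = (R*T)/(z*F) * ln([ion]_out / [ion]_in)
# Illustrative textbook concentrations (mM), not data from the paper.
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(z, out_mM, in_mM):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(out_mM / in_mM)

# The Na+/K+ pump keeps sodium high outside and potassium high inside,
# so the two ions sit at very different equilibrium potentials.
E_Na = nernst_mV(+1, out_mM=145.0, in_mM=12.0)   # roughly +67 mV
E_K  = nernst_mV(+1, out_mM=4.0, in_mM=140.0)    # roughly -95 mV
```

Blocking the pump with a cardenolide lets these gradients run down, which is why excitable tissue like nerve and heart muscle fails first.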
Scientists have known for decades how these toxins interact with the sodium pump: they bind the part of the pump protein that sticks out through the cell membrane, clogging the channel. They’ve even identified two specific amino acid changes or mutations in the protein pump that monarchs and the other insects evolved to prevent the toxin from binding.
But Whiteman and his colleagues weren’t satisfied with this ‘just so’ explanation: that insects coincidentally developed the same two identical mutations in the sodium pump 14 separate times, end of story. With the advent of CRISPR-Cas9 gene editing in 2012, co-invented by UC Berkeley’s Jennifer Doudna, Whiteman and colleagues Anurag Agrawal of Cornell University and Susanne Dobler of the University of Hamburg in Germany applied to the Templeton Foundation for a grant to recreate these mutations in fruit flies and to see if they could make the flies immune to the toxic effects of cardenolides.
Seven years, many failed attempts and one new grant from the National Institutes of Health later, along with the dedicated CRISPR work of GenetiVision of Houston, Texas, they finally achieved their goal. In the process, they discovered a third critical, compensatory mutation in the sodium pump that had to occur before the last and most potent resistance mutation would stick. Without this compensatory mutation, the maggots died.
Their detective work required inserting single, double and triple mutations into the fruit fly’s own sodium pump gene, in various orders, to assess which ones were necessary. Insects having only one of the two known amino acid changes in the sodium pump gene were best at resisting the plant poisons, but they also had serious side effects — nervous system problems — consistent with the fact that sodium pump mutations in humans are often associated with seizures. However, the third, compensatory mutation somehow reduces the negative effects of the other two mutations.
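The logic of that detective work can be sketched in a few lines. The substitution labels below follow the monarch literature, but the viability rule is a deliberately simplified illustration of the compensatory-mutation constraint, not the paper’s actual fitness data:

```python
from itertools import combinations

# Hypothetical illustration of the mutation-order constraint: the most
# potent resistance substitution is lethal unless the compensatory
# substitution is also present. Site labels follow the monarch
# literature; the rule itself is a simplification for illustration.
SUBSTITUTIONS = ("Q111V", "A119S", "N122H")

def viable(combo):
    """The potent N122H substitution needs the compensatory A119S."""
    return "N122H" not in combo or "A119S" in combo

# Enumerate every single, double and triple mutant, as in the fly work.
for r in range(1, 4):
    for combo in combinations(SUBSTITUTIONS, r):
        status = "viable" if viable(combo) else "maggots die"
        print("+".join(combo), "->", status)
```

Only some paths through mutation space are open, which is the intuition behind the ordering constraint the researchers describe next.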
“One substitution that evolved confers weak resistance, but it is always present and allows for substitutions that are going to confer the most resistance,” said postdoctoral fellow Marianthi Karageorgi, a geneticist and evolutionary biologist. “This substitution in the insect unlocks the resistance substitutions, reducing the neurological costs of resistance. Because this trait has evolved so many times, we have also shown that this is not random.”
The fact that one compensatory mutation is required before insects with the most resistant mutation could survive placed a constraint on how insects could evolve toxin resistance, explaining why all 21 lineages converged on the same solution, Whiteman said. In other situations, such as where the protein involved is not so critical to survival, animals might find different solutions.
“This helps answer the question, ‘Why does convergence evolve sometimes, but not other times?'” Whiteman said. “Maybe the constraints vary. That’s a simple answer, but if you think about it, these three mutations turned a Drosophila protein into a monarch one, with respect to cardenolide resistance. That’s kind of remarkable.”
The research was funded by the Templeton Foundation and the National Institutes of Health. Co-authors with Whiteman and Agrawal are co-first authors Marianthi Karageorgi of UC Berkeley and Simon Groen, now at New York University; Fidan Sumbul and Felix Rico of Aix-Marseille Université in France; Julianne Pelaez, Kirsten Verster, Jessica Aguilar, Susan Bernstein, Teruyuki Matsunaga and Michael Astourian of UC Berkeley; Amy Hastings of Cornell; and Susanne Dobler of Universität Hamburg in Germany.
Robert Sanders’ Oct. 2, 2019 news release for the University of California at Berkeley (it’s also been republished as an Oct. 2, 2019 news item on ScienceDaily) has had its headline changed to ‘vomit’, but you’ll find the more vulgar word remains in two locations in the second paragraph of the revised news release.
If you have time, go to the news release on the University of California at Berkeley website just to admire the images that have been embedded in the news release. Here’s one,
Here’s a link to and a citation for the paper,
Genome editing retraces the evolution of toxin resistance in the monarch butterfly by Marianthi Karageorgi, Simon C. Groen, Fidan Sumbul, Julianne N. Pelaez, Kirsten I. Verster, Jessica M. Aguilar, Amy P. Hastings, Susan L. Bernstein, Teruyuki Matsunaga, Michael Astourian, Geno Guerra, Felix Rico, Susanne Dobler, Anurag A. Agrawal & Noah K. Whiteman. Nature (2019) DOI: https://doi.org/10.1038/s41586-019-1610-8 Published 02 October 2019
This paper is behind a paywall.
Words about a word
I’m glad they changed the headline and substituted vomit for puke. I think we need vulgar and/or taboo words to release anger or disgust or other difficult emotions. Incorporating those words into standard language deprives them of that power.
The last word: Genetivision
The company mentioned in the news release, GenetiVision, is the place to go for transgenic flies. Here’s a sampling from their Testimonials webpage,
“GenetiVision‘s service has been excellent in the quality and price. The timeliness of its international service has been a big plus. We are very happy with its consistent service and the flies it generates.” Kwang-Wook Choi, Ph.D. Department of Biological Sciences Korea Advanced Institute of Science and Technology
“We couldn’t be happier with GenetiVision. Great prices on both standard P and PhiC31 transgenics, quick turnaround time, and we’re still batting 1000 with transformant success. We used to do our own injections but your service makes it both faster and more cost-effective. Thanks for your service!” Thomas Neufeld, Ph.D. Department of Genetics, Cell Biology and Development University of Minnesota
An August 29, 2019 news item on phys.org broke the news about a record-breaking transfer of quantum entanglement between matter and light,
The quantum internet promises absolutely tap-proof communication and powerful distributed sensor networks for new science and technology. However, because quantum information cannot be copied, it is not possible to send this information over a classical network. Quantum information must be transmitted by quantum particles, and special interfaces are required for this. The Innsbruck-based experimental physicist Ben Lanyon, who was awarded the Austrian START Prize in 2015 for his research, is investigating these important intersections of a future quantum Internet.
Now his team at the Department of Experimental Physics at the University of Innsbruck and at the Institute of Quantum Optics and Quantum Information of the Austrian Academy of Sciences has achieved a record for the transfer of quantum entanglement between matter and light. For the first time, a distance of 50 kilometers was covered using fiber optic cables. “This is two orders of magnitude further than was previously possible and is a practical distance to start building inter-city quantum networks,” says Ben Lanyon.
Lanyon’s team started the experiment with a calcium atom trapped in an ion trap. Using laser beams, the researchers write a quantum state onto the ion and simultaneously excite it to emit a photon in which quantum information is stored. As a result, the quantum states of the atom and the light particle are entangled. But the challenge is to transmit the photon over fiber optic cables. “The photon emitted by the calcium ion has a wavelength of 854 nanometers and is quickly absorbed by the optical fiber”, says Ben Lanyon. His team therefore initially sends the light particle through a nonlinear crystal illuminated by a strong laser. Thereby the photon wavelength is converted to the optimal value for long-distance travel: the current telecommunications standard wavelength of 1550 nanometers. The researchers from Innsbruck then send this photon through a 50-kilometer-long optical fiber line. Their measurements show that atom and light particle are still entangled even after the wavelength conversion and this long journey.
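Some rough numbers show why the wavelength conversion matters. Assuming representative fibre-attenuation figures (roughly 3 dB/km near 850 nm and about 0.2 dB/km at the 1550 nm telecom wavelength; these are generic textbook values, not ones reported by the Innsbruck team):

```python
# Back-of-the-envelope comparison of photon survival over 50 km of fibre
# at the ion's native wavelength versus the telecom wavelength.
# Attenuation figures are generic textbook values, not from the paper.

def surviving_fraction(alpha_db_per_km, distance_km):
    """Fraction of photons surviving a fibre run with the given loss."""
    return 10 ** (-alpha_db_per_km * distance_km / 10.0)

p_854 = surviving_fraction(3.0, 50.0)    # 150 dB loss: essentially none
p_1550 = surviving_fraction(0.2, 50.0)   # 10 dB loss: about 10% survive
```

A 10 dB loss still leaves a usable flux of photons; a 150 dB loss does not, which is the whole case for converting to 1550 nm first.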
Even greater distances in sight
As a next step, Lanyon and his team show that their methods would enable entanglement to be generated between ions 100 kilometers apart or more. Two nodes would each send an entangled photon over a distance of 50 kilometers to an intersection, where the light particles are measured in a way that destroys their entanglement with the ions and instead entangles the two ions with each other. With 100-kilometer node spacing now a possibility, one could envisage building the world’s first intercity light-matter quantum network in the coming years: only a handful of trapped-ion systems would be required along the way to establish a quantum internet between Innsbruck and Vienna, for example.
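The measurement step described here is entanglement swapping, and its core can be sketched as a toy state-vector calculation — a four-qubit idealization, nothing like the real apparatus:

```python
import numpy as np

# Toy entanglement swapping: two ion-photon Bell pairs; a Bell-state
# measurement on the two photons leaves the two ions entangled.
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)

# Full 4-qubit state; index order: ion A, photon 1, ion B, photon 2.
state = np.kron(phi_plus, phi_plus).reshape(2, 2, 2, 2)

# Project the photons onto the Phi+ Bell state — one of the four
# possible measurement outcomes, each occurring with probability 1/4.
proj = phi_plus.reshape(2, 2)
ion_state = np.einsum('apbq,pq->ab', state, proj.conj()).reshape(4)

prob = np.vdot(ion_state, ion_state).real   # outcome probability: 0.25
ion_state /= np.sqrt(prob)                  # ions are now in Phi+
```

The two ions never interact directly; the joint measurement on their photons is what stitches the two 50-kilometer links into one 100-kilometer entangled pair.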
Lanyon’s team is part of the Quantum Internet Alliance, an international project within the Quantum Flagship framework of the European Union. The current results have been published in npj Quantum Information, a Nature partner journal. The research was financially supported by, among others, the Austrian Science Fund FWF and the European Union.
Here’s a link to and a citation for the paper,
Light-matter entanglement over 50 km of optical fibre by V. Krutyanskiy, M. Meraner, J. Schupp, V. Krcmarsky, H. Hainzer & B. P. Lanyon. npj Quantum Information volume 5, Article number: 72 (2019) DOI: https://doi.org/10.1038/s41534-019-0186-3 Published: 27 August 2019