Tag Archives: Stanford University

Organic neuromorphic electronics

A December 13, 2021 news item on ScienceDaily describes some research from Germany’s Max Planck Institute for Polymer Research,

The human brain works differently from a computer – while the brain works with biological cells and electrical impulses, a computer uses silicon-based transistors. Scientists have equipped a toy robot with a smart and adaptive electrical circuit made of soft organic materials, similar to biological matter. With this bio-inspired approach, they were able to teach the robot to navigate independently through a maze using visual signs for guidance.

A December 13, 2021 Max Planck Institute for Polymer Research press release (also on EurekAlert), which originated the news item, fills in a few details,

The processor is the brain of a computer – an often-quoted phrase. But processors work fundamentally differently than the human brain. Transistors perform logic operations by means of electronic signals. In contrast, the brain works with nerve cells, so-called neurons, which are connected via biological conductive paths, so-called synapses. At a higher level, this signaling is used by the brain to control the body and perceive the surrounding environment. The reaction of the body/brain system when certain stimuli are perceived – for example, via the eyes, ears or sense of touch – is triggered through a learning process. For example, children learn not to reach twice for a hot stove: one input stimulus leads to a learning process with a clear behavioral outcome.

Scientists working with Paschalis Gkoupidenis, group leader in Paul Blom’s department at the Max Planck Institute for Polymer Research, have now applied this basic principle of learning through experience in a simplified form and steered a robot through a maze using a so-called organic neuromorphic circuit. The work was an extensive collaboration between the Universities of Eindhoven [Eindhoven University of Technology; Netherlands], Stanford [University; California, US], Brescia [University; Italy], Oxford [UK] and KAUST [King Abdullah University of Science and Technology, Saudi Arabia].

“We wanted to use this simple setup to show how powerful such ‘organic neuromorphic devices’ can be in real-world conditions,” says Imke Krauhausen, a doctoral student in Gkoupidenis’ group and at TU Eindhoven (van de Burgt group), and first author of the scientific paper.

To achieve navigation of the robot inside the maze, the researchers fed the smart adaptive circuit with sensory signals coming from the environment. The path through the maze towards the exit is indicated visually at each intersection. Initially, the robot often misinterprets the visual signs, makes the wrong “turning” decisions at the intersections and loses its way out. When the robot follows these wrong, dead-end paths, it is discouraged from repeating those decisions by corrective stimuli. The corrective stimuli, for example when the robot hits a wall, are applied directly to the organic circuit via electrical signals induced by a touch sensor attached to the robot. With each subsequent run of the experiment, the robot gradually learns to make the right “turning” decisions at the intersections, i.e. to avoid receiving corrective stimuli, and after a few trials it finds its way out of the maze. This learning process happens exclusively on the organic adaptive circuit.
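If you find it easier to think in code, here is a minimal Python sketch of the punishment-driven learning loop described above. To be clear, this is my own illustration: the actual robot implements this behaviour in an analog organic electrochemical circuit, not software, and the maze layout, turn options and weight-halving update below are hypothetical stand-ins for the circuit’s adjustable conductance states.

```python
import random

# Hypothetical maze: at each intersection the robot must learn which turn
# leads onward. The weights play the role of the conductance states of the
# organic neuromorphic circuit.
INTERSECTIONS = ["A", "B", "C", "D"]
CORRECT_TURN = {"A": "left", "B": "right", "C": "left", "D": "right"}

# One adjustable weight per (intersection, turn); higher weight = more likely.
weights = {(i, t): 1.0 for i in INTERSECTIONS for t in ("left", "right")}

def choose_turn(intersection):
    """Pick a turn in proportion to the stored weights."""
    left_w = weights[(intersection, "left")]
    right_w = weights[(intersection, "right")]
    return "left" if random.random() < left_w / (left_w + right_w) else "right"

def run_trial():
    """One pass through the maze; corrective stimuli punish wrong turns."""
    mistakes = 0
    for node in INTERSECTIONS:
        turn = choose_turn(node)
        if turn != CORRECT_TURN[node]:
            # The robot hits a dead end: the touch sensor applies a corrective
            # stimulus that depresses the weight of the wrong decision.
            weights[(node, turn)] *= 0.5
            mistakes += 1
    return mistakes

for trial in range(1, 11):
    print(f"trial {trial}: {run_trial()} wrong turns")
```

Run it and the number of wrong turns per trial falls towards zero, which is the software analogue of the robot learning to exit the maze after a few attempts.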

“We were really glad to see that the robot can pass through the maze after some runs by learning on a simple organic circuit. We have shown here a first, very simple setup. In the distant future, however, we hope that organic neuromorphic devices could also be used for local and distributed computing/learning. This will open up entirely new possibilities for applications in real-world robotics, human-machine interfaces and point-of-care diagnostics. Novel platforms for rapid prototyping and education, at the intersection of materials science and robotics, are also expected to emerge,” says Gkoupidenis.

Here’s a link to and a citation for the paper,

Organic neuromorphic electronics for sensorimotor integration and learning in robotics by Imke Krauhausen, Dimitrios A. Koutsouras, Armantas Melianas, Scott T. Keene, Katharina Lieberth, Hadrien Ledanseur, Rajendar Sheelamanthula, Alexander Giovannitti, Fabrizio Torricelli, Iain McCulloch, Paul W. M. Blom, Alberto Salleo, Yoeri van de Burgt and Paschalis Gkoupidenis. Science Advances • 10 Dec 2021 • Vol 7, Issue 50 • DOI: 10.1126/sciadv.abl5068

This paper is open access.

Going blind when your neural implant company flirts with bankruptcy (long read)

This story got me to thinking about what happens when any kind of implant company (pacemaker, deep brain stimulator, etc.) goes bankrupt or is acquired by another company with a different business model.

As I worked on this piece, more issues were raised and the scope expanded to include prosthetics along with implants, while the focus narrowed to ‘neuro’, as in neural implants and neuroprosthetics. At the same time, I found salient examples for this posting in other medical advances, such as gene editing.

In sum, all references to implants and prosthetics are to neural devices and some issues are illustrated with salient examples from other medical advances (specifically, gene editing).

Definitions (for those who find them useful)

The US Food and Drug Administration defines implants and prosthetics,

Medical implants are devices or tissues that are placed inside or on the surface of the body. Many implants are prosthetics, intended to replace missing body parts. Other implants deliver medication, monitor body functions, or provide support to organs and tissues.

As for what constitutes a neural implant/neuroprosthetic, there’s this from Emily Waltz’s January 20, 2020 article (How Do Neural Implants Work? Neural implants are used for deep brain stimulation, vagus nerve stimulation, and mind-controlled prostheses) for the Institute of Electrical and Electronics Engineers (IEEE) Spectrum magazine,

A neural implant, then, is a device—typically an electrode of some kind—that’s inserted into the body, comes into contact with tissues that contain neurons, and interacts with those neurons in some way.

Now, let’s start with the recent near bankruptcy of a retinal implant company.

The company goes bust (more or less)

From a February 25, 2022 Science Friday (a National Public Radio program) posting/audio file, Note: Links have been removed,

Barbara Campbell was walking through a New York City subway station during rush hour when her world abruptly went dark. For four years, Campbell had been using a high-tech implant in her left eye that gave her a crude kind of bionic vision, partially compensating for the genetic disease that had rendered her completely blind in her 30s. “I remember exactly where I was: I was switching from the 6 train to the F train,” Campbell tells IEEE Spectrum. “I was about to go down the stairs, and all of a sudden I heard a little ‘beep, beep, beep’ sound.”

It wasn’t her phone battery running out. It was her Argus II retinal implant system powering down. The patches of light and dark that she’d been able to see with the implant’s help vanished.

Terry Byland is the only person to have received this kind of implant in both eyes. He got the first-generation Argus I implant, made by the company Second Sight Medical Products, in his right eye in 2004, and the subsequent Argus II implant in his left 11 years later. He helped the company test the technology, spoke to the press movingly about his experiences, and even met Stevie Wonder at a conference. “[I] went from being just a person that was doing the testing to being a spokesman,” he remembers.

Yet in 2020, Byland had to find out secondhand that the company had abandoned the technology and was on the verge of going bankrupt. While his two-implant system is still working, he doesn’t know how long that will be the case. “As long as nothing goes wrong, I’m fine,” he says. “But if something does go wrong with it, well, I’m screwed. Because there’s no way of getting it fixed.”

Science Friday and the IEEE [Institute of Electrical and Electronics Engineers] Spectrum magazine collaborated to produce this story. You’ll find the audio files and the transcript of interviews with the authors and one of the implant patients in this February 25, 2022 Science Friday (a National Public Radio program) posting.

Here’s more from the February 15, 2022 IEEE Spectrum article by Eliza Strickland and Mark Harris,

Ross Doerr, another Second Sight patient, doesn’t mince words: “It is fantastic technology and a lousy company,” he says. He received an implant in one eye in 2019 and remembers seeing the shining lights of Christmas trees that holiday season. He was thrilled to learn in early 2020 that he was eligible for software upgrades that could further improve his vision. Yet in the early months of the COVID-19 pandemic, he heard troubling rumors about the company and called his Second Sight vision-rehab therapist. “She said, ‘Well, funny you should call. We all just got laid off,’” he remembers. “She said, ‘By the way, you’re not getting your upgrades.’”

These three patients, and more than 350 other blind people around the world with Second Sight’s implants in their eyes, find themselves in a world in which the technology that transformed their lives is just another obsolete gadget. One technical hiccup, one broken wire, and they lose their artificial vision, possibly forever. To add injury to insult: A defunct Argus system in the eye could cause medical complications or interfere with procedures such as MRI scans, and it could be painful or expensive to remove.

The writers included some information about what happened to the business, from the February 15, 2022 IEEE Spectrum article, Note: Links have been removed,

After Second Sight discontinued its retinal implant in 2019 and nearly went out of business in 2020, a public offering in June 2021 raised US $57.5 million at $5 per share. The company promised to focus on its ongoing clinical trial of a brain implant, called Orion, that also provides artificial vision. But its stock price plunged to around $1.50, and in February 2022, just before this article was published, the company announced a proposed merger with an early-stage biopharmaceutical company called Nano Precision Medical (NPM). None of Second Sight’s executives will be on the leadership team of the new company, which will focus on developing NPM’s novel implant for drug delivery.

The company’s current leadership declined to be interviewed for this article but did provide an emailed statement prior to the merger announcement. It said, in part: “We are a recognized global leader in neuromodulation devices for blindness and are committed to developing new technologies to treat the broadest population of sight-impaired individuals.”

It’s unclear what Second Sight’s proposed merger means for Argus patients. The day after the merger was announced, Adam Mendelsohn, CEO of Nano Precision Medical, told Spectrum that he doesn’t yet know what contractual obligations the combined company will have to Argus and Orion patients. But, he says, NPM will try to do what’s “right from an ethical perspective.” The past, he added in an email, is “simply not relevant to the new future.”

There may be some alternatives, from the February 15, 2022 IEEE Spectrum article (Note: Links have been removed),

Second Sight may have given up on its retinal implant, but other companies still see a need—and a market—for bionic vision without brain surgery. Paris-based Pixium Vision is conducting European and U.S. feasibility trials to see if its Prima system can help patients with age-related macular degeneration, a much more common condition than retinitis pigmentosa.

Daniel Palanker, a professor of ophthalmology at Stanford University who licensed his technology to Pixium, says the Prima implant is smaller, simpler, and cheaper than the Argus II. But he argues that Prima’s superior image resolution has the potential to make Pixium Vision a success. “If you provide excellent vision, there will be lots of patients,” he tells Spectrum. “If you provide crappy vision, there will be very few.”

Some clinicians involved in the Argus II work are trying to salvage what they can from the technology. Gislin Dagnelie, an associate professor of ophthalmology at Johns Hopkins University School of Medicine, has set up a network of clinicians who are still working with Argus II patients. The researchers are experimenting with a thermal camera to help users see faces, a stereo camera to filter out the background, and AI-powered object recognition. These upgrades are unlikely to result in commercial hardware today but could help future vision prostheses.

The writers have carefully balanced this piece so it is not an outright condemnation of the companies (Second Sight and Nano Precision), from the February 15, 2022 IEEE Spectrum article,

Failure is an inevitable part of innovation. The Argus II was an innovative technology, and progress made by Second Sight may pave the way for other companies that are developing bionic vision systems. But for people considering such an implant in the future, the cautionary tale of Argus patients left in the lurch may make a tough decision even tougher. Should they take a chance on a novel technology? If they do get an implant and find that it helps them navigate the world, should they allow themselves to depend upon it?

Abandoning the Argus II technology—and the people who use it—might have made short-term financial sense for Second Sight, but it’s a decision that could come back to bite the merged company if it does decide to commercialize a brain implant, believes Doerr.

For anyone curious about retinal implant technology (specifically the Argus II), I have a description in a June 30, 2015 posting.

Speculations and hopes for neuroprosthetics

The field of neuroprosthetics is very active. In a February 21, 2022 Nanowerk Spotlight article, Dr Arthur Saniotis and Prof Maciej Henneberg speculate about the possibilities of a neuroprosthetic that may one day merge with neurons,

For over a generation several types of medical neuroprosthetics have been developed, which have improved the lives of thousands of individuals. For instance, cochlear implants have restored functional hearing in individuals with severe hearing impairment.

Further advances in motor neuroprosthetics are attempting to restore motor functions in tetraplegic, limb loss and brain stem stroke paralysis subjects.

Currently, scientists are working on various kinds of brain/machine interfaces [BMI] in order to restore movement and partial sensory function. One such device is the ‘Ipsihand’ that enables movement of a paralyzed hand. The device works by detecting the recipient’s intention in the form of electrical signals, thereby triggering hand movement.

Another recent development is the 12-month BMI gait neurorehabilitation program that uses a visual-tactile feedback system in combination with a physical exoskeleton and EEG-operated AI actuators while walking. This program has been tried on eight patients with reported improvements in lower limb movement and somatic sensation.

Surgically placed electrode implants have also reduced tremor symptoms in individuals with Parkinson’s disease.

Although neuroprosthetics have provided various benefits, they do have their problems. Firstly, electrode implants to the brain are prone to degradation, necessitating new implants after a few years. Secondly, as in any kind of surgery, implanted electrodes can cause post-operative infection and glial scarring. Furthermore, one study showed that the neurobiological efficacy of an implant is dependent on the speed of its insertion.

But what if humans designed a neuroprosthetic, which could bypass the medical glitches of invasive neuroprosthetics? However, instead of connecting devices to neural networks, this neuroprosthetic would directly merge with neurons – a novel step. Such a neuroprosthetic could radically optimize treatments for neurodegenerative disorders and brain injuries, and possibly cognitive enhancement [emphasis mine].

A team of three international scientists has recently designed a nano-based neuroprosthetic, described in a paper published in Frontiers in Neuroscience (“Integration of Nanobots Into Neural Circuits As a Future Therapy for Treating Neurodegenerative Disorders“). [open access paper published in 2018]

An interesting feature of their nanobot neuroprosthetic is that it has been inspired by nature by way of endomycorrhizae – a type of plant/fungus symbiosis, which is over four hundred million years old. During endomycorrhizae, fungi use numerous threadlike projections called mycelium that penetrate plant roots, forming colossal underground networks with nearby root systems. During this process fungi take up vital nutrients while protecting plant roots from infections – a win-win relationship. Consequently, the nano-neuroprosthetic has been named ‘endomycorrhizae ligand interface’, or ‘ELI’ for short.

The Spotlight article goes on to describe how these nanobots might function. As for the possibility of cognitive enhancement, I wonder if that might come to be described as a form of ‘artificial intelligence’.

(Dr Arthur Saniotis and Prof Maciej Henneberg are both from the Department of Anthropology, Ludwik Hirszfeld Institute of Immunology and Experimental Therapy, Polish Academy of Sciences; and Biological Anthropology and Comparative Anatomy Research Unit, Adelaide Medical School, University of Adelaide. Abdul-Rahman Sawalma who’s listed as an author on the 2018 paper is from the Palestinian Neuroscience Initiative, Al-Quds University, Beit Hanina, Palestine.)

Saniotis and Henneberg’s Spotlight article presents an optimistic view of neuroprosthetics. It seems telling that they cite cochlear implants as a success story when it is viewed by many as ethically fraught (see the Cochlear implant Wikipedia entry; scroll down to ‘Criticism and controversy’).

Ethics and your implants

This is from an April 6, 2015 article by Luc Henry on technologist.eu,

Technologist: What are the potential consequences of accepting the “augmented human” in society?

Gregor Wolbring: There are many that we might not even envision now. But let me focus on failure and obsolescence [emphasis mine], two issues that are rarely discussed. What happens when the mechanism fails in the middle of an action? Failure has hazardous consequences, but obsolescence has psychological ones. … The constant surgical intervention needed to update the hardware may not be feasible. A person might feel obsolete if she cohabits with others using a newer version.

T. Are researchers working on prosthetics sometimes disconnected from reality?

G. W. Students engaged in the development of prosthetics have to learn how to think in societal terms and develop a broader perspective. Our education system provides them with a fascination for clever solutions to technological challenges but not with tools aiming at understanding the consequences, such as whether their product might increase or decrease social justice.

Wolbring is a professor at the University of Calgary’s Cumming School of Medicine (profile page) who writes on social issues to do with human enhancement/augmentation. As well,

Some of his areas of engagement are: ability studies including governance of ability expectations, disability studies, governance of emerging and existing sciences and technologies (e.g. nanoscale science and technology, molecular manufacturing, aging, longevity and immortality, cognitive sciences, neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors), impact of science and technology on marginalized populations, especially people with disabilities, the governance of bodily enhancement, sustainability issues, EcoHealth, resilience, ethics issues, health policy issues, human rights and sport.

He also maintains his own website here.

Not just startups

I’d classify Second Sight as a tech startup, and startups have a high rate of failure, which may not have been clear to the patients who received the implants. Clinical trials can present problems too, as this excerpt from my September 17, 2020 posting notes,

This October 31, 2017 article by Emily Underwood for Science was revelatory,

“In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.

This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.

… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.”

Symbiosis can be another consequence, as mentioned in my September 17, 2020 posting,

From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,

“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.

“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]

Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence. [emphasis mine]

It’s complicated

For a lot of people these devices are or could be life-changing. At the same time, there are a number of different issues related to implants/prosthetics; the following is not an exhaustive list. As Wolbring notes, issues that we can’t begin to imagine now are likely to emerge as these medical advances become more ubiquitous.

Ability/disability?

Assistive technologies are almost always portrayed as helpful. For example, a cochlear implant gives people without hearing the ability to hear. The assumption is that this is always a good thing—unless you’re a deaf person who wants to define the problem a little differently. Who gets to decide what is good and ‘normal’ and what is desirable?

While the cochlear implant is the most extreme example I can think of, there are variations of these questions throughout the ‘disability’ communities.

Also, as Wolbring notes in his interview with the Technologist.eu, the education system tends to favour technological solutions which don’t take social issues into account. Wolbring cites social justice issues when he mentions failure and obsolescence.

Technical failures and obsolescence

The story, excerpted earlier in this posting, opened with a striking example of a technical failure at an awkward moment: a blind woman depending on her retinal implant loses all sight as she maneuvers through a subway station in New York City.

Aside from being an awful way to find out the company supplying and supporting your implant is in serious financial trouble and can’t offer assistance or repair, the failure offers a preview of what could happen as implants and prosthetics become more commonly used.

Keeping up/fomo (fear of missing out)/obsolescence

It used to be called ‘keeping up with the Joneses’; it’s the practice of comparing yourself and your worldly goods to someone else’s and then trying to equal or outdo what they have. Usually, people want to have more and better than the mythical Joneses.

These days, the phenomenon (which has been expanded to include social networking) is better known as ‘fomo’ or fear of missing out (see the Fear of missing out Wikipedia entry).

Whatever you want to call it, humanity’s competitive nature can be seen where technology is concerned. When I worked in technology companies, I noticed that hardware and software were sometimes purchased for features that were effectively useless to us. But, not upgrading to a newer version was unthinkable.

Call it fomo or ‘keeping up with the Joneses’, it’s a powerful force and when people (and even companies) miss out or can’t keep up, it can lead to a sense of inferiority in the same way that having an obsolete implant or prosthetic could.

Social consequences

Could there be a neural implant/neuroprosthetic divide? There is already a digital divide (from its Wikipedia entry),

The digital divide is a gap between those who have access to new technology and those who do not … people without access to the Internet and other ICTs [information and communication technologies] are at a socio-economic disadvantage because they are unable or less able to find and apply for jobs, shop and sell online, participate democratically, or research and learn.

After reading Wolbring’s comments, it’s not hard to imagine a neural implant/neuroprosthetic divide with its attendant psychological and social consequences.

What kind of human am I?

There are other issues as noted in my September 17, 2020 posting. I’ve already mentioned ‘patient 6’, the woman who developed a symbiotic relationship with her brain/computer interface. This is how the relationship ended,

… He [Frederic Gilbert, ethicist] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.

… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.

“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”

Above human

The possibility that implants will not merely restore or endow someone with ‘standard’ sight or hearing or motion or … but will augment or improve on nature was broached in this May 2, 2013 posting, More than human—a bionic ear that extends hearing beyond the usual frequencies, which is one of many in the ‘Human Enhancement’ category on this blog.

More recently, Hugh Herr, an Associate Professor at the Massachusetts Institute of Technology (MIT), leader of the Biomechatronics research group at MIT’s Media Lab, a double amputee, and prosthetic enthusiast, starred in the recent (February 23, 2022) broadcast of ‘Augmented’ on the Public Broadcasting Service (PBS) science programme, Nova.

I found ‘Augmented’ a little off-putting as it gave every indication of being an advertisement for Herr’s work in the form of a hero’s journey. I was not able to watch more than 10 minutes. This preview gives you a pretty good idea of what it was like, although the part in ‘Augmented’ where he says he’d like to be a cyborg hasn’t been included,

At a guess, there were a few talking heads (taking up 10%–20% of the running time) who provided some cautionary words to counterbalance the enthusiasm in the rest of the programme. It’s a standard approach designed to give the impression that both sides of a question are being recognized. The cautionary material is usually inserted past the halfway mark while leaving several minutes at the end for returning to the more optimistic material.

In a February 2, 2010 posting I have excerpts from an article featuring quotes from Herr that I still find startling,

Written by Paul Hochman for Fast Company, Bionic Legs, iLimbs, and Other Super-Human Prostheses [ETA March 23, 2022: an updated version of the article is now on Genius.com] delves further into the world where people may be willing to trade a healthy limb for a prosthetic. From the article,

There are many advantages to having your leg amputated.

Pedicure costs drop 50% overnight. A pair of socks lasts twice as long. But Hugh Herr, the director of the Biomechatronics Group at the MIT Media Lab, goes a step further. “It’s actually unfair,” Herr says about amputees’ advantages over the able-bodied. “As tech advancements in prosthetics come along, amputees can exploit those improvements. They can get upgrades. A person with a natural body can’t.”

Herr is not the only one who favours prosthetics (also from the Hochman article),

This influx of R&D cash, combined with breakthroughs in materials science and processor speed, has had a striking visual and social result: an emblem of hurt and loss has become a paradigm of the sleek, modern, and powerful. Which is why Michael Bailey, a 24-year-old student in Duluth, Georgia, is looking forward to the day when he can amputate the last two fingers on his left hand.

“I don’t think I would have said this if it had never happened,” says Bailey, referring to the accident that tore off his pinkie, ring, and middle fingers. “But I told Touch Bionics I’d cut the rest of my hand off if I could make all five of my fingers robotic.”

But Bailey is most surprised by his own reaction. “When I’m wearing it, I do feel different: I feel stronger. As weird as that sounds, having a piece of machinery incorporated into your body, as a part of you, well, it makes you feel above human. [emphasis mine] It’s a very powerful thing.”

My September 17, 2020 posting touches on more ethical and social issues including some of those surrounding consumer neurotechnologies or brain-computer interfaces (BCI). Unfortunately, I don’t have space for these issues here.

As for Paul Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, now on Genius.com, it has been updated.

Money makes the world go around

Money and business practices have been indirectly referenced (for the most part) up to now in this posting. The February 15, 2022 IEEE Spectrum article and Hochman’s article, Bionic Legs, iLimbs, and Other Super-Human Prostheses, cover two aspects of the money angle.

In the IEEE Spectrum article, a tech startup company, Second Sight, ran into financial trouble and was acquired by a company that has no plans to develop Second Sight’s core technology. The people implanted with the Argus II technology have been stranded, as were ‘patient 6’ and others participating in the clinical trial described in the July 24, 2019 article by Liam Drew for Nature Outlook: The brain mentioned earlier in this posting.

I don’t know anything about the business bankruptcy mentioned in the Drew article but one of the business problems described in the IEEE Spectrum article suggests that Second Sight was founded before answering a basic question, “What is the market size for this product?”

On 18 July 2019, Second Sight sent Argus patients a letter saying it would be phasing out the retinal implant technology to clear the way for the development of its next-generation brain implant for blindness, Orion, which had begun a clinical trial with six patients the previous year. …

“The leadership at the time didn’t believe they could make [the Argus retinal implant] part of the business profitable,” Greenberg [Robert Greenberg, Second Sight co-founder] says. “I understood the decision, because I think the size of the market turned out to be smaller than we had thought.”

….

The question of whether a medical procedure or medicine can be profitable (or should the question be, sufficiently profitable?) was referenced in my April 26, 2019 posting in the context of gene editing and personalized medicine.

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

The ‘Glybera’ story in my July 4, 2019 posting (scroll down about 40% of the way) highlights the issue with “recurring revenue versus chronic therapies,”

Kelly Crowe, in a November 17, 2018 article for CBC (Canadian Broadcasting Corporation) News, writes about Glybera,

It is one of this country’s great scientific achievements.

The first drug ever approved that can fix a faulty gene.

It’s called Glybera, and it can treat a painful and potentially deadly genetic disorder with a single dose — a genuine made-in-Canada medical breakthrough.

But most Canadians have never heard of it.

Here’s my summary (from the July 4, 2019 posting),

It cost $1M for a single treatment and that single treatment is good for at least 10 years.

Pharmaceutical companies make their money from repeated use of their medicaments and Glybera required only one treatment, so the company priced it according to how much they would have gotten for repeated use: $100,000 per year over a 10-year period. The company was not able to persuade governments and/or individuals to pay the cost.

In the end, 31 people got the treatment, most of them received it for free through clinical trials.

For rich people only?

Megan Devlin’s March 8, 2022 article for the Daily Hive announces a major investment in medical research (Note: A link has been removed),

Vancouver [Canada] billionaire Chip Wilson revealed Tuesday [March 8, 2022] that he has a rare genetic condition that causes his muscles to waste away, and announced he’s spending $100 million on research to find a cure.

His condition is called facio-scapulo-humeral muscular dystrophy, or FSHD for short. It progresses rapidly in some people and more slowly in others, but is characterized by progressive muscle weakness starting in the face, the neck, shoulders, and later the lower body.

“I’m out for survival of my own life,” Wilson said.

“I also have the resources to do something about this which affects so many people in the world.”

Wilson hopes the $100 million will produce a cure or muscle-regenerating treatment by 2027.

“This could be one of the biggest discoveries of all time, for humankind,” Wilson said. “Most people lose muscle, they fall, and they die. If we can keep muscle as we age this can be a longevity drug like we’ve never seen before.”

According to rarediseases.org, FSHD affects between four and 10 people out of every 100,000 [emphasis mine]. Right now, therapies are limited to exercise and pain management. There is no way to stall or reverse the disease’s course.

Wilson is best known for founding athleisure clothing company Lululemon. He also owns the most expensive home in British Columbia, a $73 million mansion in Vancouver’s Kitsilano neighbourhood.

Let’s see what the numbers add up to,

4 – 10 people out of 100,000

40 – 100 people out of 1M

1,200 – 3,000 people out of 30M (let’s say this is Canada’s population)

12,000 – 30,000 people out of 300M (let’s say this is the US’s population)

44,600 – 111,500 out of 1.115B (let’s say this is China’s population)

The rough total comes to 57,800 to 144,500 people across three countries with a combined population of 1.445B. Given how business currently operates, it seems unlikely that any company will want to offer Wilson’s hoped-for medical therapy, although he and possibly others may benefit from a clinical trial.
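For anyone who’d like to check that arithmetic, the same prevalence scaling can be done in a few lines of Python (the population figures are the rough ones I used above, not precise census numbers),

```python
# FSHD prevalence: 4 to 10 cases per 100,000 people (rarediseases.org).
low_rate, high_rate = 4 / 100_000, 10 / 100_000

# Rough population figures used above.
populations = {"Canada": 30e6, "US": 300e6, "China": 1.115e9}

for country, pop in populations.items():
    print(f"{country}: {low_rate * pop:,.0f} – {high_rate * pop:,.0f} people")

total_pop = sum(populations.values())
print(f"Combined ({total_pop / 1e9:.3f}B): "
      f"{low_rate * total_pop:,.0f} – {high_rate * total_pop:,.0f} people")
```

Running this prints 1,200 – 3,000 for Canada, 12,000 – 30,000 for the US, 44,600 – 111,500 for China, and 57,800 – 144,500 for the combined 1.445B.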

Should profit or wealth be considerations?

The stories about the patients with the implants and the patients who need Glybera are heartbreaking and point to a question not often asked when medical therapies and medications are developed. Is the profit model the best choice and, if so, how much profit?

I have no answer to that question but I wish it were asked by medical researchers and policy makers.

As for wealthy people dictating the direction for medical research, I don’t have answers there either. I hope the research will yield applications and/or valuable information for more than Wilson’s disease.

It’s his money after all

Wilson calls his new venture SolveFSHD. It doesn’t seem to be affiliated with any university or biomedical science organization and it’s not clear how the money will be awarded (no programmes, no application procedure, no panel of experts). There are three people on the team: Eva R. Chin, scientist and executive director; Chip Wilson, SolveFSHD founder/funder and FSHD patient; and Neil Camarta, engineer, executive (fossil fuels and clean energy), and FSHD patient. There’s also a Twitter feed (presumably for the latest updates): https://twitter.com/SOLVEFSHD.

Perhaps unrelated but intriguing is news about a proposed new building in Kenneth Chan’s March 31, 2022 article for the Daily Hive,

Low Tide Properties, the real estate arm of Lululemon founder Chip Wilson [emphasis mine], has submitted a new development permit application to build a 148-ft-tall, eight-storey, mixed-use commercial building in the False Creek Flats of Vancouver.

The proposal, designed by local architectural firm Musson Cattell Mackey Partnership, calls for 236,000 sq ft of total floor area, including 105,000 sq ft of general office space, 102,000 sq ft of laboratory space [emphasis mine], and 5,000 sq ft of ground-level retail space. An outdoor amenity space for building workers will be provided on the rooftop.

[next door] The 2001-built, five-storey building at 1618 Station Street immediately to the west of the development site is also owned by Low Tide Properties [emphasis mine]. The Ferguson, the name of the existing building, contains about 79,000 sq ft of total floor area, including 47,000 sq ft of laboratory space and 32,000 sq ft of general office space. Biotechnology company STEMCELL Technologies is the anchor tenant [emphasis mine].

I wonder if this proposed new building will house SolveFSHD and perhaps other FSHD-focused enterprises. The proximity of STEMCELL Technologies could be quite convenient. In any event, $100M will buy a lot (pun intended).

The end

Issues I’ve described here in the context of neural implants/neuroprosthetics and cutting edge medical advances are standard problems not specific to these technologies/treatments:

  • What happens when the technology fails (hopefully not at a critical moment)?
  • What happens when your supplier goes out of business or discontinues the products you purchase from them?
  • How much does it cost?
  • Who can afford the treatment/product? Will it only be for rich people?
  • Will this technology/procedure/etc. exacerbate or create new social tensions between social classes, cultural groups, religious groups, races, etc.?

Of course, having your neural implant fail suddenly in the middle of a New York City subway station seems a substantively different experience than having your car break down on the road.

There are, of course, the issues we can’t yet envision (as Wolbring notes) and there are issues such as symbiotic relationships with our implants and/or feeling that you are “above human.” Whether symbiosis and ‘implant/prosthetic superiority’ will affect more than a small number of people or become major issues is still to be determined.

There’s a lot to be optimistic about where new medical research and advances are concerned but I would like to see more thoughtful coverage in the media (e.g., news programmes and documentaries like ‘Augmented’) and more thoughtful comments from medical researchers.

Of course, the biggest issue I’ve raised here is about the current business models for health care products where profit is valued over people’s health and well-being. It’s a big question and I don’t see any definitive answers, but the question put me in mind of this quote (from a September 22, 2020 obituary for US Supreme Court Justice Ruth Bader Ginsburg by Irene Monroe for Curve),

Ginsburg’s advocacy for justice was unwavering and she showed it, especially with each oral dissent. In another oral dissent, Ginsburg quoted a familiar Martin Luther King Jr. line, adding her coda: “The arc of the universe is long, but it bends toward justice,” but only “if there is a steadfast commitment to see the task through to completion.” …

Martin Luther King Jr. popularized and paraphrased the quote (from a January 18, 2018 article by Mychal Denzel Smith for Huffington Post),

His use of the quote is best understood by considering his source material. “The arc of the moral universe is long, but it bends toward justice” is King’s clever paraphrasing of a portion of a sermon delivered in 1853 by the abolitionist minister Theodore Parker. Born in Lexington, Massachusetts, in 1810, Parker studied at Harvard Divinity School and eventually became an influential transcendentalist and minister in the Unitarian church. In that sermon, Parker said: “I do not pretend to understand the moral universe. The arc is a long one. My eye reaches but little ways. I cannot calculate the curve and complete the figure by experience of sight. I can divine it by conscience. And from what I see I am sure it bends toward justice.”

I choose to keep faith that people will get the healthcare products they need and that all of us need to keep working at making access more fair.

Of Health Myths and Trickster Viruses; a Who Cares? windup event on Friday, April 1, 2022 (+ more final Who Cares? events)

Toronto’s ArtSci Salon has been hosting a series of events and exhibitions about COVID-19 and other health care issues under the “Who Cares?” banner. The exhibitions and events are now coming to an end (see my February 9, 2022 posting for a full listing).

A March 29, 2022 Art/Sci Salon announcement (received via email) heralds the last roundtable event (see my March 7, 2022 posting for more about the Who Cares? roundtables), Note: This is an online event,

 
Bayo Akomolafe
Seema Yasmin


Of Health Myths and Trickster Viruses

Friday, April 1 [2022], 5:00-7:00 pm [ET]


A conversation on the unsettling dimensions of epidemics and the complexities of responses to their challenges.

Register here

Seema Yasmin,  Director of Research and Education, Stanford Health Communication Initiative. She is an Emmy Award-winning journalist, Pulitzer prize finalist, medical doctor and Stanford and UCLA professor.

Bayo Akomolafe, Chief Curator, The Emergence Network. He is a widely celebrated international speaker, posthumanist thinker, poet, teacher, public intellectual, essayist, and author.


Here are the acknowledgements,

“Who Cares?” is a Speaker Series dedicated to fostering transdisciplinary conversations between doctors, writers, artists, and researchers on contemporary biopolitics of care and the urgent need to move towards more respectful, creative, and inclusive social practices of care in the wake of the systemic cracks made obvious by the pandemic.

We wish to thank the generous support of the Social Science and Humanities Research Council of Canada, New College at the University of Toronto and The Faculty of Liberal Arts and Professional Studies at York University; the Centre for Feminist Research, Sensorium Centre for Digital Arts and Technology, The Canadian Language Museum, the Departments of English and the School of Gender and Women’s Studies at York University; and the D.G. Ivey Library and the Institute for the History and Philosophy of Science and Technology at the University of Toronto. We also wish to thank the support of The Fields Institute for Research in Mathematical Sciences.

This series is co-produced in collaboration with the ArtSci Salon

The Who Cares? series webpage, found here, lists the exhibitions and final events,

Exhibitions
March 24 – April 30 [2022]

Alanna Kibbe – TRANSFORM: Exploring Languages of Healing. Opening March 31, 5 pm 
Canadian Language Museum, 2275 Bayview Avenue, York University Glendon Campus

in person. Virtual opening available

Camille Baker – INTER/her. Opening April 7 [2022], 4 pm
Ivey Library, 20 Willcox Street, New College, University of Toronto

in person. Virtual opening available

Closing Presentation and Interactive Session
Karolina Żyniewicz – Signs of the time, Collecting Biological Traces and Memories

Artist talk: April 8 [2022], 4:00-6:00 [ET]
online

Memory Collection: Apr 9 [2022], 2:00-4:00 [ET]

online and in person

Microneedle vaccine patch outperforms needle

Vaccine patch sounds a lot friendlier than ‘needle’ and in the hoopla about vaccine hesitancy I have to wonder if the fact that some people don’t like or are deeply fearful of needles is being overlooked.

Perhaps this or some other vaccine patch* will be ready for use in time for the next pandemic. From a September 24, 2021 news item on ScienceDaily,

Scientists at Stanford University and the University of North Carolina [UNC] at Chapel Hill have created a 3D-printed vaccine patch that provides greater protection than a typical vaccine shot.

The trick is applying the vaccine patch directly to the skin, which is full of immune cells that vaccines target.

The resulting immune response from the vaccine patch was 10 times greater than that from a vaccine delivered into an arm muscle with a needle jab, according to a study conducted in animals and published by the team of scientists in the Proceedings of the National Academy of Sciences [PNAS].

A September 23, 2021 University of North Carolina at Chapel Hill news release (also on EurekAlert but published Sept. 24, 2021), which originated the news item, describes the patch in greater detail (Note: Links have been removed),

Considered a breakthrough are the 3D-printed microneedles lined up on a polymer patch and barely long enough to reach the skin to deliver vaccine.

“In developing this technology, we hope to set the foundation for even more rapid global development of vaccines, at lower doses, in a pain- and anxiety-free manner,” said lead study author and entrepreneur in 3D print technology Joseph M. DeSimone, professor of translational medicine and chemical engineering at Stanford University and professor emeritus at UNC-Chapel Hill.

The ease and effectiveness of a vaccine patch sets the course for a new way to deliver vaccines that’s painless, less invasive than a shot with a needle and can be self-administered. 

Study results show the vaccine patch generated a significant T-cell and antigen-specific antibody response that was 50 times greater than a subcutaneous injection delivered under the skin.

That heightened immune response could lead to dose sparing, with a microneedle vaccine patch using a smaller dose to generate a similar immune response as a vaccine delivered with a needle and syringe.

While microneedle patches have been studied for decades, the work by Carolina and Stanford overcomes some past challenges: through 3D printing, the microneedles can be easily customized to develop various vaccine patches for flu, measles, hepatitis or COVID-19 vaccines.

Advantages of the vaccine patch

The COVID-19 pandemic has been a stark reminder of the difference made with timely vaccination. But getting a vaccine typically requires a visit to a clinic or hospital.

There a health care provider obtains a vaccine from a refrigerator or freezer, fills a syringe with the liquid vaccine formulation and injects it into the arm.

Although this process seems simple, there are issues that can hinder mass vaccination – from cold storage of vaccines to needing trained professionals who can give the shots.

Meanwhile vaccine patches, which incorporate vaccine-coated microneedles that dissolve into the skin, could be shipped anywhere in the world without special handling and people can apply the patch themselves.

Moreover, the ease of using a vaccine patch may lead to higher vaccination rates.

How the patches are made

It’s generally a challenge to adapt microneedles to different vaccine types, said lead study author Shaomin Tian, researcher in the Department of Microbiology and Immunology in the UNC School of Medicine.

“These issues, coupled with manufacturing challenges, have arguably held back the field of microneedles for vaccine delivery,” she said.  

Most microneedle vaccines are fabricated with master templates to make molds. However, the molding of microneedles is not very versatile, and drawbacks include reduced needle sharpness during replication.

“Our approach allows us to directly 3D print the microneedles which gives us lots of design latitude for making the best microneedles from a performance and cost point-of-view,” Tian said.

The microneedle patches were 3D printed at the University of North Carolina at Chapel Hill using a CLIP prototype 3D printer that DeSimone invented and that is produced by CARBON, a Silicon Valley company he co-founded.

The team of microbiologists and chemical engineers are continuing to innovate by formulating RNA vaccines, like the Pfizer and Moderna COVID-19 vaccines, into microneedle patches for future testing.

“One of the biggest lessons we’ve learned during the pandemic is that innovation in science and technology can make or break a global response,” DeSimone said. “Thankfully we have biotech and health care workers pushing the envelope for us all.”

Additional study authors include Cassie Caudill, Jillian L. Perry, Kimon Iliadis, Addis T. Tessema and Beverly S. Mecham of UNC-Chapel Hill and Brian J. Lee of Stanford.

Here’s a link to and a citation for the paper,

Transdermal vaccination via 3D-printed microneedles induces potent humoral and cellular immunity by Cassie Caudill, Jillian L. Perry, Kimon Iliadis, Addis T. Tessema, Brian J. Lee, Beverly S. Mecham, Shaomin Tian, and Joseph M. DeSimone. PNAS September 28, 2021 118 (39) e2102595118; DOI: https://doi.org/10.1073/pnas.2102595118

This paper appears to be open access.

*I have featured vaccine patches here before, this December 16, 2016 post (Australia’s nanopatch: a way to eliminate needle vaccinations) is one of many stretching back to 2009.

Who Cares? a series of Art/Sci Salon talks and exhibitions in February and March 2022

COVID-19 has put health care workers in a more than usually interesting position and the Art/Sci Salon in Toronto, Canada is ‘creatively’ addressing the old, new, and emerging stresses. From the Who Cares? events webpage (also in a February 8, 2022 notice received via email),

“Who Cares?” is a Speaker Series dedicated to fostering transdisciplinary conversations between doctors, writers, artists, and researchers on contemporary biopolitics of care and the urgent need to move towards more respectful, creative, and inclusive social practices of care in the wake of the systemic cracks made obvious by the pandemic.

About the Series

Critiques of the health care sector are certainly not new and have been put forward by workers and researchers in the medical sector and in the humanities alike. However, critique alone fails to consider the systemic issues that prevent well-meaning practitioners from making a difference. The goal of this series is to activate practical conversations between people who are already engaged in transforming the infrastructures and cultures of care but have few opportunities to speak to each other. These interdisciplinary dialogues will enable the sharing of emerging epistemologies, new material approaches and pedagogies that could take us beyond the current crisis. By engaging with the arts as research, our guests use the generative insights of poetic and artistic practices to zoom in on the crucial issues undermining holistic, dynamic and socially responsible forms of care. Furthermore, they champion transdisciplinary dialogues and multipronged approaches directed at changing the material and discursive practices of care.

Who cares? asks the following important questions:

How do we lay the groundwork for sustainable practices of care, that is, care beyond ‘just-in-time’ interventions?

What strategies can we devise to foster genuine transdisciplinary approaches that move beyond the silo effects of specialization, address current uncritical trends towards technological delegation, and restore the centrality of responsive/responsible human relations in healthcare delivery?

What practices can help ameliorate the atomizing pitfalls of turning the patient into data?

What pathways can we design to re-direct attention to long lasting care focused on a deeper understanding of the manifold relationalities between doctors, patients, communities, and the socio-environmental context?

How can the critically creative explorations of artists and writers contribute to building resilient communities of care that cultivate reciprocity, respect for the unpredictable temporalities of healing, and active listening?

How to build a capacious infrastructure of care able to address and mend the damages caused by ideologies of ultimate cure that pervade corporate approaches to healthcare funding and delivery?

The first event starts on February 14, 2022 (from the On care, beauty, and Where Things Touch webpage),

On care, beauty, and Where Things Touch

Bahar Orang (University of Toronto, Psychiatry)

Feb. 14 [2022], 10:30 am – 12:30 pm [ET]

This event will be online, please register HERE to participate. After registering, you will receive a confirmation email containing information about joining the meeting. 

A Conversation with Bahar Orang, author of Where Things Touch, on staying attuned to the fragile intimacies of care beyond the stifling demands of institutional environments. 

This short presentation will ask questions about care that move it beyond the carceral logics of hospital settings, particularly in psychiatry. Drawing from questions raised in my first book Where Things Touch, and my work with Doctors for Defunding Police (DFDP), I hope to pose the question of how to do the work of health care differently. As the pandemic has laid bare so much violence, it becomes imperative to engage in forms of political imaginativeness that proactively ask what are the forms that care can take, and does already take, in places other than the clinic or the hospital? 

Bahar Orang is a writer and clinician scholar in the Department of Psychiatry at the University of Toronto. Her creative and clinical work seeks to engage with ways of imagining care beyond the carcerality that medical institutions routinely reproduce.

Here’s the full programme from the Who Cares? events webpage,

Opening dialogue
February 14, 10:30-12:30 pm [ET]
On care, beauty, and Where Things Touch

Bahar Orang, University of Toronto, Psychiatry

( Online)

Keynote
Thursday March 10, 1:00-3:00 pm [ET]
Keynote and public reveal of Data meditation

Salvatore Iaconesi and Oriana Persico
independent artists, HER, She Loves Data

(Online)

Roundtables
1. Friday, March 11 – 5:00 to 7:00 pm [ET]
Beyond triage and data culture

Maria Antonia Gonzalez-Valerio, Professor of Philosophy and Literature, UNAM, Mexico City.
Sharmistha Mishra, Infectious Disease Physician and Mathematical Modeller, St Michael’s Hospital
Madhur Anand, Ecologist, School of Environmental Sciences, University of Guelph
Salvatore Iaconesi and Oriana Persico, independent artists, HER, She Loves Data

(Online)

2. Friday, March 18 – 6:00 to 8:00 pm [ET]
Critical care and sustainable care

Suvendrini Lena, MD, Playwright and Neurologist at CAMH and Centre for Headache, Women’s College Hospital, Toronto
Adriana Ieraci, Roboticist and PhD candidate in Computer Science, Ryerson University
Lucia Gagliese – Pain Aging Lab, York University

(Online)

3. Friday, March 25 – 5:00 to 7:00 pm [ET]
Building communities and technologies of care

Camille Baker, University for the Creative Arts, School of Film, Media and Performing Arts
Alanna Kibbe, independent artist, Toronto

(Online)

Keynote Conversation
Friday, April 1, 5:00-7:00 pm [ET]
Seema Yasmin, Director of Research and Education, Stanford Health Communication Initiative [Stanford University]
Bayo Akomolafe, Chief Curator of The Emergence Network

(hybrid) William Doo Auditorium, 45 Willcox Street, Toronto

Exhibitions
March 24 – April 30

Alanna Kibbe – TRANSFORM: Exploring Languages of Healing. Opening March 31, 5 pm 
Canadian Language Museum, 2275 Bayview Avenue, York University Glendon Campus

(Hybrid event. Limited in person visits by appointment)

Camille Baker INTER/her. Opening April 7, 4 pm [ET]
Ivey Library, 20 Willcox Street, New College, University of Toronto

(Hybrid event. Limited in person visits by appointment)

Closing Presentation and Interactive Session
Karolina Żyniewicz – Signs of the time, Collecting Biological Traces and Memories

Artist talk: April 8, 4:00-6:00 [ET]
Memory Collection: Apr 9, 2:00-4:00

* The format of this program and access might change with the medical situation

We wish to thank the Social Sciences and Humanities Research Council of Canada, New College, the D.G. Ivey Library, and the Institute for the History and Philosophy of Science and Technology at the University of Toronto for their generous support, along with the Centre for Feminist Research, the Sensorium Centre for Digital Arts and Technology, the Canadian Language Museum, and the Department of English and the School of Gender and Women’s Studies at York University. We also wish to thank The Fields Institute for Research in Mathematical Sciences for its support.

This series is co-produced in collaboration with the ArtSci Salon.

Hopefully, one of those times works for you.

Creating time crystals with a quantum computer

This November 30, 2021 news item on phys.org about time crystals caught my attention,

There is a huge global effort to engineer a computer capable of harnessing the power of quantum physics to carry out computations of unprecedented complexity. While formidable technological obstacles still stand in the way of creating such a quantum computer, today’s early prototypes are still capable of remarkable feats.

For example, the creation of a new phase of matter called a “time crystal.” Just as a crystal’s structure repeats in space, a time crystal repeats in time and, importantly, does so infinitely and without any further input of energy—like a clock that runs forever without any batteries. The quest to realize this phase of matter has been a longstanding challenge in theory and experiment—one that has now finally come to fruition.

In research published Nov. 30 [2021] in Nature, a team of scientists from Stanford University, Google Quantum AI, the Max Planck Institute for Physics of Complex Systems and Oxford University detail their creation of a time crystal using Google’s Sycamore quantum computing hardware.

The Google Sycamore chip used in the creation of a time crystal. Credit: Google Quantum AI [downloaded from https://phys.org/news/2021-11-physicists-crystals-quantum.html]

A November 30, 2021 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, delves further into the work and into the nature of time crystals,

“The big picture is that we are taking the devices that are meant to be the quantum computers of the future and thinking of them as complex quantum systems in their own right,” said Matteo Ippoliti, a postdoctoral scholar at Stanford and co-lead author of the work. “Instead of computation, we’re putting the computer to work as a new experimental platform to realize and detect new phases of matter.”

For the team, the excitement of their achievement lies not only in creating a new phase of matter but in opening up opportunities to explore new regimes in their field of condensed matter physics, which studies the novel phenomena and properties brought about by the collective interactions of many objects in a system. (Such interactions can be far richer than the properties of the individual objects.)

“Time-crystals are a striking example of a new type of non-equilibrium quantum phase of matter,” said Vedika Khemani, assistant professor of physics at Stanford and a senior author of the paper. “While much of our understanding of condensed matter physics is based on equilibrium systems, these new quantum devices are providing us a fascinating window into new non-equilibrium regimes in many-body physics.”

What a time crystal is and isn’t

The basic ingredients to make this time crystal are as follows: The physics equivalent of a fruit fly and something to give it a kick. The fruit fly of physics is the Ising model, a longstanding tool for understanding various physical phenomena – including phase transitions and magnetism – which consists of a lattice where each site is occupied by a particle that can be in two states, represented as a spin up or down.

During her graduate school years, Khemani, her doctoral advisor Shivaji Sondhi, then at Princeton University, and Achilleas Lazarides and Roderich Moessner at the Max Planck Institute for Physics of Complex Systems stumbled upon this recipe for making time crystals unintentionally. They were studying non-equilibrium many-body localized systems – systems where the particles get “stuck” in the state in which they started and can never relax to an equilibrium state. They were interested in exploring phases that might develop in such systems when they are periodically “kicked” by a laser. Not only did they manage to find stable non-equilibrium phases, they found one where the spins of the particles flipped between patterns that repeat in time forever, at a period twice that of the driving period of the laser, thus making a time crystal.

The periodic kick of the laser establishes a specific rhythm to the dynamics. Normally the “dance” of the spins should sync up with this rhythm, but in a time crystal it doesn’t. Instead, the spins flip between two states, completing a cycle only after being kicked by the laser twice. This means that the system’s “time translation symmetry” is broken. Symmetries play a fundamental role in physics, and they are often broken – explaining the origins of regular crystals, magnets and many other phenomena; however, time translation symmetry stands out because unlike other symmetries, it can’t be broken in equilibrium. The periodic kick is a loophole that makes time crystals possible.

The doubling of the oscillation period is unusual, but not unprecedented. And long-lived oscillations are also very common in the quantum dynamics of few-particle systems. What makes a time crystal unique is that it’s a system of millions of things that are showing this kind of concerted behavior without any energy coming in or leaking out.

“It’s a completely robust phase of matter, where you’re not fine-tuning parameters or states but your system is still quantum,” said Sondhi, professor of physics at Oxford and co-author of the paper. “There’s no feed of energy, there’s no drain of energy, and it keeps going forever and it involves many strongly interacting particles.”
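
To get an intuitive feel for why that robustness is remarkable, here is a toy Python sketch of my own (not the paper's model): a single, non-interacting spin that is 'kicked' by an x-rotation once per drive period. A perfect pi pulse gives a period-doubled response, but the slightest detuning makes the oscillation drift; it is the many-body localized interactions, absent from this toy, that lock a real time crystal at exactly twice the drive period.

```python
import numpy as np

# Toy model: one spin, kicked by an x-rotation once per drive period.
sz = np.array([[1, 0], [0, -1]], dtype=complex)
up = np.array([1, 0], dtype=complex)

def kick(theta):
    # Rotation about the x-axis by angle theta (the periodic "kick")
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def run(theta, n_kicks=12):
    # Record <sigma_z> after each kick
    state, record = up.copy(), []
    U = kick(theta)
    for _ in range(n_kicks):
        state = U @ state
        record.append(float(np.real(state.conj() @ sz @ state)))
    return np.round(record, 3)

print(run(np.pi))         # perfect pi pulse: -1, 1, -1, ... (period-2 response)
print(run(0.95 * np.pi))  # slightly detuned pulse: the oscillation drifts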

While this may sound suspiciously close to a “perpetual motion machine,” a closer look reveals that time crystals don’t break any laws of physics. Entropy – a measure of disorder in the system – remains stationary over time, marginally satisfying the second law of thermodynamics by not decreasing.

Between the development of this plan for a time crystal and the quantum computer experiment that brought it to reality, many experiments by many different teams of researchers achieved various almost-time-crystal milestones. However, providing all the ingredients in the recipe for “many-body localization” (the phenomenon that enables an infinitely stable time crystal) had remained an outstanding challenge.

For Khemani and her collaborators, the final step to time crystal success was working with a team at Google Quantum AI. Together, this group used Google’s Sycamore quantum computing hardware to program 20 “spins” using the quantum version of a classical computer’s bits of information, known as qubits.
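
The experimental circuits are detailed in the Nature paper. Purely to give a flavour of what 'programming 20 spins' on quantum hardware looks like, here is a minimal sketch of my own using Google's open-source Cirq library, with made-up gate parameters (a kicked, disordered Ising-style chain, not the team's actual protocol):

```python
import cirq
import numpy as np

n_qubits, n_cycles = 20, 10
qubits = cirq.LineQubit.range(n_qubits)
g = 0.97  # strength of the (deliberately imperfect) pi pulse; made-up value

circuit = cirq.Circuit()
for _ in range(n_cycles):
    # The "kick": an approximate pi rotation on every spin
    circuit.append(cirq.rx(np.pi * g).on(q) for q in qubits)
    # Ising-like couplings between neighbours, with random strengths
    circuit.append(
        cirq.ZZPowGate(exponent=np.random.uniform(0.1, 0.3)).on(a, b)
        for a, b in zip(qubits, qubits[1:])
    )
    # Random on-site disorder, the ingredient behind many-body localization
    circuit.append(cirq.rz(np.random.uniform(-np.pi, np.pi)).on(q) for q in qubits)
circuit.append(cirq.measure(*qubits, key="z"))

result = cirq.Simulator().run(circuit, repetitions=100)
print(result.histogram(key="z"))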

Revealing just how intense the interest in time crystals currently is, another time crystal was published in Science this month [November 2021]. That crystal was created using qubits within a diamond by researchers at Delft University of Technology in the Netherlands.

Quantum opportunities

The researchers were able to confirm their claim of a true time crystal thanks to special capabilities of the quantum computer. Although the finite size and coherence time of the (imperfect) quantum device meant that their experiment was limited in size and duration – so that the time crystal oscillations could only be observed for a few hundred cycles rather than indefinitely – the researchers devised various protocols for assessing the stability of their creation. These included running the simulation forward and backward in time and scaling its size.

“We managed to use the versatility of the quantum computer to help us analyze its own limitations,” said Moessner, co-author of the paper and director at the Max Planck Institute for Physics of Complex Systems. “It essentially told us how to correct for its own errors, so that the fingerprint of ideal time-crystalline behavior could be ascertained from finite time observations.”

A key signature of an ideal time crystal is that it shows indefinite oscillations from all states. Verifying this robustness to choice of states was a key experimental challenge, and the researchers devised a protocol to probe over a million states of their time crystal in just a single run of the machine, requiring mere milliseconds of runtime. This is like viewing a physical crystal from many angles to verify its repetitive structure.

“A unique feature of our quantum processor is its ability to create highly complex quantum states,” said Xiao Mi, a researcher at Google and co-lead author of the paper. “These states allow the phase structures of matter to be effectively verified without needing to investigate the entire computational space – an otherwise intractable task.”

Creating a new phase of matter is unquestionably exciting on a fundamental level. In addition, the fact that these researchers were able to do so points to the increasing usefulness of quantum computers for applications other than computing. “I am optimistic that with more and better qubits, our approach can become a main method in studying non-equilibrium dynamics,” said Pedram Roushan, researcher at Google and senior author of the paper.

“We think that the most exciting use for quantum computers right now is as platforms for fundamental quantum physics,” said Ippoliti. “With the unique capabilities of these systems, there’s hope that you might discover some new phenomenon that you hadn’t predicted.”

A view of the Google dilution refrigerator, which houses the Sycamore chip. Credit: Google Quantum AI [downloaded from https://scitechdaily.com/stanford-and-google-team-up-to-create-time-crystals-with-quantum-computers/]

Here’s a link to and a citation for the paper,

Time-Crystalline Eigenstate Order on a Quantum Processor by Xiao Mi, Matteo Ippoliti, Chris Quintana, Ami Greene, Zijun Chen, Jonathan Gross, Frank Arute, Kunal Arya, Juan Atalaya, Ryan Babbush, Joseph C. Bardin, Joao Basso, Andreas Bengtsson, Alexander Bilmes, Alexandre Bourassa, Leon Brill, Michael Broughton, Bob B. Buckley, David A. Buell, Brian Burkett, Nicholas Bushnell, Benjamin Chiaro, Roberto Collins, William Courtney, Dripto Debroy, Sean Demura, Alan R. Derk, Andrew Dunsworth, Daniel Eppens, Catherine Erickson, Edward Farhi, Austin G. Fowler, Brooks Foxen, Craig Gidney, Marissa Giustina, Matthew P. Harrigan, Sean D. Harrington, Jeremy Hilton, Alan Ho, Sabrina Hong, Trent Huang, Ashley Huff, William J. Huggins, L. B. Ioffe, Sergei V. Isakov, Justin Iveland, Evan Jeffrey, Zhang Jiang, Cody Jones, Dvir Kafri, Tanuj Khattar, Seon Kim, Alexei Kitaev, Paul V. Klimov, Alexander N. Korotkov, Fedor Kostritsa, David Landhuis, Pavel Laptev, Joonho Lee, Kenny Lee, Aditya Locharla, Erik Lucero, Orion Martin, Jarrod R. McClean, Trevor McCourt, Matt McEwen, Kevin C. Miao, Masoud Mohseni, Shirin Montazeri, Wojciech Mruczkiewicz, Ofer Naaman, Matthew Neeley, Charles Neill, Michael Newman, Murphy Yuezhen Niu, Thomas E. O’Brien, Alex Opremcak, Eric Ostby, Balint Pato, Andre Petukhov, Nicholas C. Rubin, Daniel Sank, Kevin J. Satzinger, Vladimir Shvarts, Yuan Su, Doug Strain, Marco Szalay, Matthew D. Trevithick, Benjamin Villalonga, Theodore White, Z. Jamie Yao, Ping Yeh, Juhwan Yoo, Adam Zalcman, Hartmut Neven, Sergio Boixo, Vadim Smelyanskiy, Anthony Megrant, Julian Kelly, Yu Chen, S. L. Sondhi, Roderich Moessner, Kostyantyn Kechedzhi, Vedika Khemani & Pedram Roushan. Nature (2021) DOI: https://doi.org/10.1038/s41586-021-04257-w Published 30 November 2021

This is a preview of the unedited paper being provided by Nature. Click on the Download PDF button (to the right of the title) to get access.

True love with AI (artificial intelligence): The Nature of Things explores emotional and creative AI (long read)

The Canadian Broadcasting Corporation’s (CBC) science television series, The Nature of Things, which has been broadcast since November 1960, explored the world of emotional, empathic and creative artificial intelligence (AI) in a Friday, November 19, 2021 telecast titled, The Machine That Feels,

The Machine That Feels explores how artificial intelligence (AI) is catching up to us in ways once thought to be uniquely human: empathy, emotional intelligence and creativity.

As AI moves closer to replicating humans, it has the potential to reshape every aspect of our world – but most of us are unaware of what looms on the horizon.

Scientists see AI technology as an opportunity to address inequities and make a better, more connected world. But it also has the capacity to do the opposite: to stoke division and inequality and disconnect us from fellow humans. The Machine That Feels, from The Nature of Things, shows viewers what they need to know about a field that is advancing at a dizzying pace, often away from the public eye.

What does it mean when AI makes art? Can AI interpret and understand human emotions? How is it possible that AI creates sophisticated neural networks that mimic the human brain? The Machine That Feels investigates these questions, and more.

In Vienna, composer Walter Werzowa has — with the help of AI — completed Beethoven’s previously unfinished 10th symphony. By feeding data about Beethoven, his music, his style and the original scribbles on the 10th symphony into an algorithm, AI has created an entirely new piece of art.

In Atlanta, Dr. Ayanna Howard and her robotics lab at Georgia Tech are teaching robots how to interpret human emotions. Where others see problems, Howard sees opportunity: how AI can help fill gaps in education and health care systems. She believes we need a fundamental shift in how we perceive robots: let’s get humans and robots to work together to help others.

At Tufts University in Boston, a new type of biological robot has been created: the xenobot. The size of a grain of sand, xenobots are grown from frog heart and skin cells, and combined with the “mind” of a computer. Programmed with a specific task, they can move together to complete it. In the future, they could be used for environmental cleanup, digesting microplastics and targeted drug delivery (like releasing chemotherapy compounds directly into tumours).

The film includes interviews with global leaders, commentators and innovators from the AI field, including Geoff Hinton, Yoshua Bengio, Ray Kurzweil and Douglas Coupland, who highlight some of the innovative and cutting-edge AI technologies that are changing our world.

The Machine That Feels focuses on one central question: in the flourishing age of artificial intelligence, what does it mean to be human?

I’ll get back to that last bit, “… what does it mean to be human?” later.

There’s a lot to appreciate in this 44 min. programme. As you’d expect, there was a significant chunk of time devoted to research being done in the US but Poland and Japan also featured and Canadian content was substantive. A number of tricky topics were covered and transitions from one topic to the next were smooth.

In the end credits, I counted over 40 source materials from Getty Images, Google Canada, Gatebox, amongst others. It would have been interesting to find out which segments were produced by CBC.

David Suzuki’s (programme host) script was well written and his narration was enjoyable, engaging, and non-intrusive. That last quality is not always true of CBC hosts who can fall into the trap of overdramatizing the text.

Drilling down

I have followed artificial intelligence stories in a passive way (i.e., I don’t seek them out) for many years. Even so, there was a lot of material in the programme that was new to me.

For example, there was this love story (from the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage on the CBC),

In the The Machine That Feels, a documentary from The Nature of Things, we meet Kondo Akihiko, a Tokyo resident who “married” a hologram of virtual pop singer Hatsune Miku using a certificate issued by Gatebox (the marriage isn’t recognized by the state, and Gatebox acknowledges the union goes “beyond dimensions”).

I found Akihiko to be quite moving when he described his relationship, which is not unique. It seems some 4,000 men have ‘wed’ their digital companions; you can read about that and more on the ‘I love her and see her as a real woman.’ Meet a man who ‘married’ an artificial intelligence hologram webpage.

What does it mean to be human?

Overall, this Nature of Things episode embraces certainty, which means the question of what it means to be human is referenced rather than seriously discussed. It is an unanswerable philosophical question, and the programme is ill-equipped to address it, especially since none of the commentators are philosophers or seem inclined to philosophize.

The programme presents AI as a juggernaut. Briefly mentioned is the notion that we need to make some decisions about how our juggernaut is developed and utilized. No one discusses how we go about making changes to systems that are already making critical decisions for us. (For more about AI and decision-making, see my February 28, 2017 posting and scroll down to the ‘Algorithms and big data’ subhead for Cathy O’Neil’s description of how important decisions that affect us are being made by AI systems. She is the author of the 2016 book, ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’; still a timely read.)

In fact, the programme’s tone is mostly one of breathless excitement. A few misgivings are expressed; e.g., one woman who has an artificial ‘texting friend’ (Replika; a chatbot app) noted that it can ‘get into your head’. She described a chat where her ‘friend’ told her that all of a woman’s worth is based on her body; she pushed back but intimated that someone more vulnerable could find that messaging difficult to deal with.

The sequence featuring Akihiko and his hologram ‘wife’ is followed by one suggesting that people might become more isolated and emotionally stunted as they interact with artificial friends. It should be noted, Akihiko’s wife is described as ‘perfect’. I gather perfection means that you are always understanding and have no needs of your own. She also seems to be about 18″ high.

Akihiko has obviously been asked about his ‘wife’ before as his answers are ready. They boil down to “there are many types of relationships” and there’s nothing wrong with that. It’s an intriguing thought which is not explored.

Also unexplored, these relationships could be said to resemble slavery. After all, you pay for these friends over which you have control. But perhaps that’s alright since AI friends don’t have consciousness. Or do they? In addition to not being able to answer the question, “what is it to be human?” we still can’t answer the question, “what is consciousness?”

AI and creativity

The Nature of Things team works fast. ‘Beethoven X – The AI Project’ had its first performance on October 9, 2021. (See my October 1, 2021 post ‘Finishing Beethoven’s unfinished 10th Symphony’ for more on the technical perspective of Ahmed Elgammal [Director of the Art & AI Lab at Rutgers University].

Briefly, Beethoven died before completing his 10th symphony; a number of computer scientists, musicologists, AI specialists, and musicians collaborated to finish it.)

The one listener shown in the hall during a performance (Felix Mayer, music professor at the Technical University of Munich) doesn’t consider the work to be a piece of music. He does have a point. Beethoven left some notes, but this ’10th’ is at least partly mathematical guesswork: a set of probabilities from which an algorithm chooses the next note.

There was another artist also represented in the programme. Puzzlingly, it was the still living Douglas Coupland. In my opinion, he’s better known as a visual artist than a writer (his Wikipedia entry lists him as a novelist first) but he has succeeded greatly in both fields.

What makes his inclusion in the Nature of Things episode ‘The Machine That Feels’ puzzling is that it’s not clear how he worked with artificial intelligence in a collaborative fashion. Here’s a description of Coupland’s ‘AI’ project from a June 29, 2021 posting by Chris Henry on the Google Outreach blog (Note: Links have been removed),

… when the opportunity presented itself to explore how artificial intelligence (AI) inspires artistic expression — with the help of internationally renowned Canadian artist Douglas Coupland — the Google Research team jumped on it. This collaboration, with the support of Google Arts & Culture, culminated in a project called Slogans for the Class of 2030, which spotlights the experiences of the first generation of young people whose lives are fully intertwined with the existence of AI. 

This collaboration was brought to life by first introducing Coupland’s written work to a machine learning language model. Machine learning is a form of AI that provides computer systems the ability to automatically learn from data. In this case, Google research scientists tuned a machine learning algorithm with Coupland’s 30-year body of written work — more than a million words — so it would familiarize itself with the author’s unique style of writing. From there, curated general-public social media posts on selected topics were added to teach the algorithm how to craft short-form, topical statements. [emphases mine]

Once the algorithm was trained, the next step was to process and reassemble suggestions of text for Coupland to use as inspiration to create twenty-five Slogans for the Class of 2030. [emphasis mine]

“I would comb through ‘data dumps’ where characters from one novel were speaking with those in other novels in ways that they might actually do. It felt like I was encountering a parallel universe Doug,” Coupland says. “And from these outputs, the statements you see here in this project appeared like gems. Did I write them? Yes. No. Could they have existed without me? No.” [emphases mine]

So, the algorithms crunched through Coupland’s written work and social media texts to produce slogans, which Coupland then ‘combed through’ to pick out 25 for the ‘Slogans For The Class of 2030’ project. (Note: In the programme, he says that he started a sentence and then the AI system completed that sentence with material gleaned from his own writings, which brings to mind Exquisite Corpse, a collaborative game for writers originated by the Surrealists, possibly as early as 1918.)
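
Google hasn't published the project's code, but the general recipe (take a pretrained language model, tune it on an author's corpus, then sample short completions from a prompt) can be sketched with the Hugging Face transformers library. Here GPT-2 and the prompt are stand-ins of mine, not the actual model or data Google used:

```python
from transformers import pipeline

# GPT-2 stands in for the (unnamed) Google model; in the real project the
# model was first tuned on roughly a million words of Coupland's writing.
generator = pipeline("text-generation", model="gpt2")

# The documentary describes Coupland starting a sentence and the system
# completing it in something like his style.
prompt = "The class of 2030 will"
for out in generator(prompt, max_length=20, num_return_sequences=3,
                     do_sample=True, temperature=0.9):
    print(out["generated_text"])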

The ‘slogans’ project also reminds me of William S. Burroughs and the cut-up technique used in his work. From the William S. Burroughs Cut-up technique webpage on the Language is a Virus website (Thank you to Lake Rain Vajra for a very interesting website),

The cutup is a mechanical method of juxtaposition in which Burroughs literally cuts up passages of prose by himself and other writers and then pastes them back together at random. This literary version of the collage technique is also supplemented by literary use of other media. Burroughs transcribes taped cutups (several tapes spliced into each other), film cutups (montage), and mixed media experiments (results of combining tapes with television, movies, or actual events). Thus Burroughs’s use of cutups develops his juxtaposition technique to its logical conclusion as an experimental prose method, and he also makes use of all contemporary media, expanding his use of popular culture.

[Burroughs says] “All writing is in fact cut-ups. A collage of words read heard overheard. What else? Use of scissors renders the process explicit and subject to extension and variation. Clear classical prose can be composed entirely of rearranged cut-ups. Cutting and rearranging a page of written words introduces a new dimension into writing enabling the writer to turn images in cinematic variation. Images shift sense under the scissors smell images to sound sight to sound to kinesthetic. This is where Rimbaud was going with his color of vowels. And his “systematic derangement of the senses.” The place of mescaline hallucination: seeing colors tasting sounds smelling forms.

“The cut-ups can be applied to other fields than writing. Dr Neumann [emphasis mine] in his Theory of Games and Economic behavior introduces the cut-up method of random action into game and military strategy: assume that the worst has happened and act accordingly. … The cut-up method could be used to advantage in processing scientific data. [emphasis mine] How many discoveries have been made by accident? We cannot produce accidents to order. The cut-ups could add new dimension to films. Cut gambling scene in with a thousand gambling scenes all times and places. Cut back. Cut streets of the world. Cut and rearrange the word and image in films. There is no reason to accept a second-rate product when you can have the best. And the best is there for all. Poetry is for everyone . . .”

First, John von Neumann (1902 – 57) is a very important figure in the history of computing. From a February 25, 2017 John von Neumann and Modern Computer Architecture essay on the ncLab website, “… he invented the computer architecture that we use today.”

Here’s Burroughs on the history of writers and cutups (thank you to QUEDEAR for posting this clip),

You can hear Burroughs talk about the technique and how he started using it in 1959.

There is no explanation from Coupland as to how his project differs substantively from Burroughs’ cut-ups or a session of Exquisite Corpse. The use of a computer programme to crunch through data and give output doesn’t seem all that exciting. *(More about computers and chatbots at end of posting).* It’s hard to know if this was an interview situation where he wasn’t asked the question or if the editors decided against including it.
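
For comparison, the mechanical core of a Burroughs-style cut-up needs no machine learning at all; a few lines of Python will do (my own toy illustration):

```python
import random

def cut_up(text, chunk_words=4, seed=None):
    """Burroughs-style cut-up: slice text into chunks, paste back at random."""
    words = text.split()
    chunks = [words[i:i + chunk_words] for i in range(0, len(words), chunk_words)]
    random.Random(seed).shuffle(chunks)
    return " ".join(word for chunk in chunks for word in chunk)

passage = ("All writing is in fact cut-ups. A collage of words read heard "
           "overheard. Use of scissors renders the process explicit.")
print(cut_up(passage, seed=1959))  # 1959: the year Burroughs began using cut-ups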

Kazuo Ishiguro?

Given that Ishiguro’s 2021 book (Klara and the Sun) is focused on an artificial friend and raises the question of ‘what does it mean to be human’, as well as the related question, ‘what is the nature of consciousness’, it would have been interesting to hear from him. He spent a fair amount of time looking into research on machine learning in preparation for his book. Maybe he was too busy?

AI and emotions

The work being done by Georgia Tech’s Dr. Ayanna Howard and her robotics lab is fascinating. They are teaching robots how to interpret human emotions. The segment which features researchers teaching and interacting with robots, Pepper and Salt, also touches on AI and bias.

Watching two African American researchers talk about the ways in which AI is unable to read emotions on ‘black’ faces as accurately as ‘white’ faces is quite compelling. It also reinforces the uneasiness you might feel after the ‘Replika’ segment where an artificial friend informs a woman that her only worth is her body.

(Interestingly, Pepper and Salt are produced by Softbank Robotics, part of Softbank, a multinational Japanese conglomerate, [see a June 28, 2021 article by Ian Carlos Campbell for The Verge] whose entire management team is male according to their About page.)

While Howard is very hopeful about the possibilities of a machine that can read emotions, she doesn’t explore (on camera) any means for pushing back against bias other than training AI by using more black faces to help them learn. Perhaps more representative management and coding teams in technology companies?

While the programme largely focused on AI as an algorithm on a computer, robots can be enabled by AI (as can be seen in the segment with Dr. Howard).

My February 14, 2019 posting features research with a completely different approach to emotions and machines,

“I’ve always felt that robots shouldn’t just be modeled after humans [emphasis mine] or be copies of humans,” he [Guy Hoffman, assistant professor at Cornell University] said. “We have a lot of interesting relationships with other species. Robots could be thought of as one of those ‘other species,’ not trying to copy what we do but interacting with us with their own language, tapping into our own instincts.”

[from a July 16, 2018 Cornell University news release on EurekAlert]

This brings the question back to, what is consciousness?

What scientists aren’t taught

Dr. Howard notes that scientists are not taught to consider the implications of their work. Her comment reminded me of a question I was asked many years ago after a presentation; it concerned whether or not science had any morality. (I said, no.)

My reply angered an audience member (a visual artist who was working with scientists at the time) as she took it personally and started defending scientists as good people who care and have morals and values. She failed to understand that the way in which we teach science conforms to a notion that somewhere there are scientific facts which are neutral and objective. Society and its values are irrelevant in the face of the larger ‘scientific truth’ and, as a consequence, you don’t need to teach or discuss how your values or morals affect that truth or what the social implications of your work might be.

Science is practiced without much if any thought to values. By contrast, there is the medical injunction, “Do no harm,” which suggests to me that someone recognized competing values. E.g., If your important and worthwhile research is harming people, you should ‘do no harm’.

The experts, the connections, and the Canadian content

It’s been a while since I’ve seen Ray Kurzweil mentioned but he seems to be getting more attention these days. (See this November 16, 2021 posting by Jonny Thomson titled, “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” on The Big Think for more). Note: I will have a little more about evolution later in this post.

Interestingly, Kurzweil is employed by Google these days (see his Wikipedia entry, the column to the right). So is Geoffrey Hinton, another one of the experts in the programme (see Hinton’s Wikipedia entry, the column to the right, under Institutions).

I’m not sure about Yoshua Bengio’s relationship with Google, but he’s a professor at the Université de Montréal and the Scientific Director for Mila (Quebec’s Artificial Intelligence research institute) and IVADO (Institut de valorisation des données). (Note: IVADO is not particularly relevant to what’s being discussed in this post.)

As for Mila, the Canada Google blog in a November 21, 2016 posting notes a $4.5M grant to the institution,

Google invests $4.5 Million in Montreal AI Research

A new grant from Google for the Montreal Institute for Learning Algorithms (MILA) will fund seven faculty across a number of Montreal institutions and will help tackle some of the biggest challenges in machine learning and AI, including applications in the realm of systems that can understand and generate natural language. In other words, better understand a fan’s enthusiasm for Les Canadien [sic].

Google is expanding its academic support of deep learning at MILA, renewing Yoshua Bengio’s Focused Research Award and offering Focused Research Awards to MILA faculty at University of Montreal and McGill University:

Google reaffirmed their commitment to Mila in 2020 with a grant worth almost $4M (from a November 13, 2020 posting on the Mila website, Note: A link has been removed),

Google Canada announced today [November 13, 2020] that it will be renewing its funding of Mila – Quebec Artificial Intelligence Institute, with a generous pledge of nearly $4M over a three-year period. Google previously invested $4.5M US in 2016, enabling Mila to grow from 25 to 519 researchers.

In a piece written for Google’s Official Canada Blog, Yoshua Bengio, Mila Scientific Director, says that this year marked a “watershed moment for the Canadian AI community,” as the COVID-19 pandemic created unprecedented challenges that demanded rapid innovation and increased interdisciplinary collaboration between researchers in Canada and around the world.

COVID-19 has changed the world forever and many industries, from healthcare to retail, will need to adapt to thrive in our ‘new normal.’ As we look to the future and how priorities will shift, it is clear that AI is no longer an emerging technology but a useful tool that can serve to solve world problems. Google Canada recognizes not only this opportunity but the important task at hand and I’m thrilled they have reconfirmed their support of Mila with an additional $3,95 million funding grant until 22.

– Yoshua Bengio, for Google’s Official Canada Blog

Interesting, eh? Of course, Douglas Coupland is working with Google, presumably for money, and that would connect over 50% of the Canadian content (Douglas Coupland, Yoshua Bengio, and Geoffrey Hinton; Kurzweil is an American) in the programme to Google.

My hat’s off to Google’s marketing communications and public relations teams.

Anthony Morgan of Science Everywhere also provided some Canadian content. His LinkedIn profile indicates that he’s working on a PhD in molecular science, which is described this way, “My work explores the characteristics of learning environments, that support critical thinking and the relationship between critical thinking and wisdom.”

Morgan is also the founder and creative director of Science Everywhere, from his LinkedIn profile, “An events & media company supporting knowledge mobilization, community engagement, entrepreneurship and critical thinking. We build social tools for better thinking.”

There is this from his LinkedIn profile,

I develop, create and host engaging live experiences & media to foster critical thinking.

I’ve spent my 15+ years studying and working in psychology and science communication, thinking deeply about the most common individual and societal barriers to critical thinking. As an entrepreneur, I lead a team to create, develop and deploy cultural tools designed to address those barriers. As a researcher I study what we can do to reduce polarization around science.

There’s a lot more to Morgan (do look him up; he has connections to the CBC and other media outlets). The difficulty is: why was he chosen to talk about artificial intelligence and emotions and creativity when he doesn’t seem to know much about the topic? He does mention GPT-3, an AI language model. He seems to be acting as an advocate for AI although he offers this bit of almost cautionary wisdom, “… algorithms are sets of instructions.” (You can find out more about GPT-3 in my April 27, 2021 posting. There’s also this November 26, 2021 posting [The Inherent Limitations of GPT-3] by Andrey Kurenkov, a PhD student with the Stanford [University] Vision and Learning Lab.)

Most of the cautionary commentary comes from Luke Stark, assistant professor at Western [Ontario] University’s Faculty of Information and Media Studies. He’s the one who mentions stunted emotional growth.

Before moving on, there is another set of connections through the Pan-Canadian Artificial Intelligence Strategy, a Canadian government science funding initiative announced in the 2017 federal budget. The funds allocated to the strategy are administered by the Canadian Institute for Advanced Research (CIFAR). Yoshua Bengio through Mila is associated with the strategy and CIFAR, as is Geoffrey Hinton through his position as Chief Scientific Advisor for the Vector Institute.

Evolution

Getting back to “The Singularity: When will we all become super-humans? Are we really only a moment away from “The Singularity,” a technological epoch that will usher in a new era in human evolution?” Xenobots point in a disconcerting (for some of us) evolutionary direction.

I featured the work, which is being done at Tufts University in the US, in my June 21, 2021 posting, which includes an embedded video,

From a March 31, 2021 news item on ScienceDaily,

Last year, a team of biologists and computer scientists from Tufts University and the University of Vermont (UVM) created novel, tiny self-healing biological machines from frog cells called “Xenobots” that could move around, push a payload, and even exhibit collective behavior in the presence of a swarm of other Xenobots.

Get ready for Xenobots 2.0.

Also from an excerpt in the posting, the team has “created life forms that self-assemble a body from single cells, do not require muscle cells to move, and even demonstrate the capability of recordable memory.”

Memory is key to intelligence and this work introduces the notion of ‘living’ robots which leads to questioning what constitutes life. ‘The Machine That Feels’ is already grappling with far too many questions to address this development but introducing the research here might have laid the groundwork for the next episode, The New Human, telecast on November 26, 2021,

While no one can be certain what will happen, evolutionary biologists and statisticians are observing trends that could mean our future feet only have four toes (so long, pinky toe) or our faces may have new combinations of features. The new humans might be much taller than their parents or grandparents, or have darker hair and eyes.

And while evolution takes a lot of time, we might not have to wait too long for a new version of ourselves.

Technology is redesigning the way we look and function — at a much faster pace than evolution. We are merging with technology more than ever before: our bodies may now have implanted chips, smart limbs, exoskeletons and 3D-printed organs. A revolutionary gene editing technique has given us the power to take evolution into our own hands and alter our own DNA. How long will it be before we are designing our children?

Though the story about the xenobots doesn’t say so, we could also take the evolution of another species into our hands.

David Suzuki, where are you?

Our programme host, David Suzuki, surprised me. I thought that, as an environmentalist, he’d point out that the huge amounts of computing power needed for artificial intelligence, as mentioned in the programme, constitute an environmental issue. I also would have expected a geneticist like Suzuki to have some concerns with regard to xenobots but perhaps that’s being saved for the next episode (The New Human) of the Nature of Things.

Artificial stupidity

Thanks to Will Knight for introducing me to the term ‘artificial stupidity’. Knight, a senior writer, covers artificial intelligence for WIRED magazine. According to the Wikipedia entry for artificial stupidity,

Artificial stupidity is commonly used as a humorous opposite of the term artificial intelligence (AI), often as a derogatory reference to the inability of AI technology to adequately perform its tasks.[1] However, within the field of computer science, artificial stupidity is also used to refer to a technique of “dumbing down” computer programs in order to deliberately introduce errors in their responses.

Knight was using the term in its humorous, derogatory form.

Finally

The episode certainly got me thinking, if not quite in the way the producers might have hoped. ‘The Machine That Feels’ is a glossy, pretty well researched piece of infotainment.

To be blunt, I like and have no problems with infotainment but it can be seductive. I found it easier to remember the artificial friends, wife, xenobots, and symphony than the critiques and concerns.

Hopefully, ‘The Machine That Feels’ stimulates more interest in some very important topics. If you missed the telecast, you can catch the episode here.

For anyone curious about predictive policing, which was mentioned in the Ayanna Howard segment, see my November 23, 2017 posting about Vancouver’s plunge into AI and car theft.

*ETA December 6, 2021: One of the first ‘chatterbots’ was ELIZA, a computer programme developed from 1964 to 1966. The most famous ELIZA script was DOCTOR, where the programme simulated a therapist. Many early users believed ELIZA understood and could respond as a human would, despite the insistence otherwise of Joseph Weizenbaum, the programme’s creator.
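
The trick that convinced ELIZA's users was little more than pattern matching plus pronoun 'reflection'. A minimal Python sketch (my own illustration, not Weizenbaum's original code, which was written in MAD-SLIP) looks something like this:

```python
import re

# Swap first-person words for second-person ones in the echoed fragment
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),  # catch-all keeps the 'conversation' going
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip(" .!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel sad about my work"))  # Why do you feel sad about your work?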

A graphene ‘camera’ and your beating heart: say cheese

Comparing it to a ‘camera’, even with the quotes, is a bit of a stretch for my taste but I can’t come up with a better comparison. Here’s a video so you can judge for yourself,

Caption: This video repeats three times the graphene camera images of a single beat of an embryonic chicken heart. The images, separated by 5 milliseconds, were measured by a laser bouncing off a graphene sheet lying beneath the heart. The images are about 2 millimeters on a side. Credit: UC Berkeley images by Halleh Balch, Alister McGuire and Jason Horng

A June 16, 2021 news item on ScienceDaily announces the research,

Bay Area [San Francisco, California] scientists have captured the real-time electrical activity of a beating heart, using a sheet of graphene to record an optical image — almost like a video camera — of the faint electric fields generated by the rhythmic firing of the heart’s muscle cells.

A University of California at Berkeley (UC Berkeley) June 16, 2021 news release (also on EurekAlert) by Robert Sanders, which originated the news item, provides more detail,

The graphene camera represents a new type of sensor useful for studying cells and tissues that generate electrical voltages, including groups of neurons or cardiac muscle cells. To date, electrodes or chemical dyes have been used to measure electrical firing in these cells. But electrodes and dyes measure the voltage at one point only; a graphene sheet measures the voltage continuously over all the tissue it touches.

The development, published online last week in the journal Nano Letters, comes from a collaboration between two teams of quantum physicists at the University of California, Berkeley, and physical chemists at Stanford University.

“Because we are imaging all cells simultaneously onto a camera, we don’t have to scan, and we don’t have just a point measurement. We can image the entire network of cells at the same time,” said Halleh Balch, one of three first authors of the paper and a recent Ph.D. recipient in UC Berkeley’s Department of Physics.

While the graphene sensor works without having to label cells with dyes or tracers, it can easily be combined with standard microscopy to image fluorescently labeled nerve or muscle tissue while simultaneously recording the electrical signals the cells use to communicate.

“The ease with which you can image an entire region of a sample could be especially useful in the study of neural networks that have all sorts of cell types involved,” said another first author of the study, Allister McGuire, who recently received a Ph.D. from Stanford. “If you have a fluorescently labeled cell system, you might only be targeting a certain type of neuron. Our system would allow you to capture electrical activity in all neurons and their support cells with very high integrity, which could really impact the way that people do these network level studies.”

Graphene is a one-atom thick sheet of carbon atoms arranged in a two-dimensional hexagonal pattern reminiscent of honeycomb. The 2D structure has captured the interest of physicists for several decades because of its unique electrical properties and robustness and its interesting optical and optoelectronic properties.

“This is maybe the first example where you can use an optical readout of 2D materials to measure biological electrical fields,” said senior author Feng Wang, UC Berkeley professor of physics. “People have used 2D materials to do some sensing with pure electrical readout before, but this is unique in that it works with microscopy so that you can do parallel detection.”

The team calls the tool a critically coupled waveguide-amplified graphene electric field sensor, or CAGE sensor.

“This study is just a preliminary one; we want to showcase to biologists that there is such a tool you can use, and you can do great imaging. It has fast time resolution and great electric field sensitivity,” said the third first author, Jason Horng, a UC Berkeley Ph.D. recipient who is now a postdoctoral fellow at the National Institute of Standards and Technology. “Right now, it is just a prototype, but in the future, I think we can improve the device.”

Graphene is sensitive to electric fields

Ten years ago, Wang discovered that an electric field affects how graphene reflects or absorbs light. Balch and Horng exploited this discovery in designing the graphene camera. They obtained a sheet of graphene about 1 centimeter on a side produced by chemical vapor deposition in the lab of UC Berkeley physics professor Michael Crommie and placed on it a live heart from a chicken embryo, freshly extracted from a fertilized egg. These experiments were performed in the Stanford lab of Bianxiao Cui, who develops nanoscale tools to study electrical signaling in neurons and cardiac cells.

The team showed that when the graphene was tuned properly, the electrical signals that flowed along the surface of the heart during a beat were sufficient to change the reflectance of the graphene sheet.

“When cells contract, they fire action potentials that generate a small electric field outside of the cell,” Balch said. “The absorption of graphene right under that cell is modified, so we will see a change in the amount of light that comes back from that position on the large area of graphene.”

In initial studies, however, Horng found that the change in reflectance was too small to detect easily. An electric field reduces the reflectance of graphene by at most 2%; the effect was much less from changes in the electric field when the heart muscle cells fired an action potential.

Together, Balch, Horng and Wang found a way to amplify this signal by adding a thin waveguide below graphene, forcing the reflected laser light to bounce internally about 100 times before escaping. This made the change in reflectance detectable by a normal optical video camera.

“One way of thinking about it is that the more times that light bounces off of graphene as it propagates through this little cavity, the more effects that light feels from graphene’s response, and that allows us to obtain very, very high sensitivity to electric fields and voltages down to microvolts,” Balch said.
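
The arithmetic behind that amplification is worth a quick back-of-envelope sketch: if a single pass off the graphene changes the reflected light by a small fraction delta, then N passes accumulate a change of roughly 1 - (1 - delta)^N, which is about N times delta when delta is small. The numbers below are my own illustrative guesses, not figures from the paper:

```python
# Back-of-envelope: how multiple bounces amplify a tiny reflectance change.
# delta is a made-up per-pass modulation; the release quotes ~100 bounces.
delta, n_bounces = 0.0004, 100

single_pass = delta
multi_pass = 1 - (1 - delta) ** n_bounces  # accumulated modulation

print(f"single pass: {single_pass:.4%}")         # 0.0400%
print(f"{n_bounces} bounces: {multi_pass:.2%}")  # ~3.92% -- now easily detectable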

The increased amplification necessarily lowers the resolution of the image, but at 10 microns, it is more than enough to study cardiac cells that are several tens of microns across, she said.

Another application, McGuire said, is to test the effect of drug candidates on heart muscle before these drugs go into clinical trials to see whether, for example, they induce an unwanted arrhythmia. To demonstrate this, he and his colleagues observed the beating chicken heart with CAGE and an optical microscope while infusing it with a drug, blebbistatin, that inhibits the muscle protein myosin. They observed the heart stop beating, but CAGE showed that the electrical signals were unaffected.

Because graphene sheets are mechanically tough, they could also be placed directly on the surface of the brain to get a continuous measure of electrical activity — for example, to monitor neuron firing in the brains of those with epilepsy or to study fundamental brain activity. Today’s electrode arrays measure activity at a few hundred points, not continuously over the brain surface.

“One of the things that is amazing to me about this project is that electric fields mediate chemical interactions, mediate biophysical interactions — they mediate all sorts of processes in the natural world — but we never measure them. We measure current, and we measure voltage,” Balch said. “The ability to actually image electric fields gives you a look at a modality that you previously had little insight into.”

Here’s a link to and a citation for the paper,

Graphene Electric Field Sensor Enables Single Shot Label-Free Imaging of Bioelectric Potentials by Halleh B. Balch, Allister F. McGuire, Jason Horng, Hsin-Zon Tsai, Kevin K. Qi, Yi-Shiou Duh, Patrick R. Forrester, Michael F. Crommie, Bianxiao Cui, and Feng Wang. Nano Lett. 2021, XXXX, XXX, XXX-XXX DOI: https://doi.org/10.1021/acs.nanolett.1c00543 Publication Date: June 8, 2021 © 2021 American Chemical Society

This paper is behind a paywall.

An algorithm for modern quilting

Caption: Each of the blocks in this quilt was designed using an algorithm-based tool developed by Stanford researchers. Credit: Mackenzie Leake

I love the colours. This research into quilting and artificial intelligence (AI) was presented at SIGGRAPH 2021 in August. (SIGGRAPH, also known as ACM SIGGRAPH, is the Association for Computing Machinery’s Special Interest Group on Computer Graphics and Interactive Techniques.)

A June 3, 2021 news item on ScienceDaily announced the presentation,

Stanford University computer science graduate student Mackenzie Leake has been quilting since age 10, but she never imagined the craft would be the focus of her doctoral dissertation. Included in that work is new prototype software that can facilitate pattern-making for a form of quilting called foundation paper piecing, which involves using a backing made of foundation paper to lay out and sew a quilted design.

Developing a foundation paper piece quilt pattern — which looks similar to a paint-by-numbers outline — is often non-intuitive. There are few formal guidelines for patterning and those that do exist are insufficient to assure a successful result.

“Quilting has this rich tradition and people make these very personal, cherished heirlooms but paper piece quilting often requires that people work from patterns that other people designed,” said Leake, who is a member of the lab of Maneesh Agrawala, the Forest Baskett Professor of Computer Science and director of the Brown Institute for Media Innovation at Stanford. “So, we wanted to produce a digital tool that lets people design the patterns that they want to design without having to think through all of the geometry, ordering and constraints.”

A paper describing this work is published and will be presented at the computer graphics conference SIGGRAPH 2021 in August.

A June 2, 2021 Stanford University news release (also on EurekAlert), which originated the news item, provides more detail,

Respecting the craft

In describing the allure of paper piece quilts, Leake cites the modern aesthetic and high level of control and precision. The seams of the quilt are sewn through the paper pattern and, as the seaming process proceeds, the individual pieces of fabric are flipped over to form the final design. All of this “sew and flip” action means the pattern must be produced in a careful order.

Poorly executed patterns can lead to loose pieces, holes, misplaced seams and designs that are simply impossible to complete. When quilters create their own paper piecing designs, figuring out the order of the seams can take considerable time – and still lead to unsatisfactory results.

“The biggest challenge that we’re tackling is letting people focus on the creative part and offload the mental energy of figuring out whether they can use this technique or not,” said Leake, who is lead author of the SIGGRAPH paper. “It’s important to me that we’re really aware and respectful of the way that people like to create and that we aren’t over-automating that process.”

This isn’t Leake’s first foray into computer-aided quilting. She previously designed a tool for improvisational quilting, which she presented [PatchProv: Supporting Improvisational Design Practices for Modern Quilting by Mackenzie Leake, Frances Lai, Tovi Grossman, Daniel Wigdor, and Ben Lafreniere] at the human-computer interaction conference CHI in May [2021]. [Note: Links to the May 2021 conference and paper added by me.]

Quilting theory

Developing the algorithm at the heart of this latest quilting software required a substantial theoretical foundation. With few existing guidelines to go on, the researchers had to first gain a more formal understanding of what makes a quilt paper piece-able, and then represent that mathematically.

They eventually found what they needed in a particular graph structure, called a hypergraph. While so-called “simple” graphs can only connect data points by lines, a hypergraph can accommodate overlapping relationships between many data points. (A Venn diagram is a type of hypergraph.) The researchers found that a pattern will be paper piece-able if it can be depicted by a hypergraph whose edges can be removed one at a time in a specific order – which would correspond to how the seams are sewn in the pattern.
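
The paper's exact removability condition is more subtle, but the overall shape of the algorithm (keep peeling off a seam, i.e. a hyperedge, that is currently safe to remove, and declare the design piece-able if the whole graph empties) can be sketched as follows. Note that the 'safe to remove' test here is a simplified stand-in of mine, not the paper's criterion:

```python
def peel_order(hyperedges):
    """Try to remove seams (hyperedges) one at a time.

    hyperedges: dict mapping seam name -> set of fabric regions it joins.
    Stand-in removability test: a seam is removable when it touches a
    region no other remaining seam needs. Returns a candidate sewing
    order (the reverse of the removal order), or None if we get stuck.
    """
    remaining, removal_order = dict(hyperedges), []
    while remaining:
        for seam, regions in remaining.items():
            others = set().union(*(r for s, r in remaining.items() if s != seam))
            if regions - others:          # seam has a "private" region
                removal_order.append(seam)
                del remaining[seam]
                break
        else:
            return None                    # no seam removable: not piece-able
    return removal_order[::-1]

# Hypothetical three-seam design joining fabric regions A-D
quilt = {"s1": {"A", "B"}, "s2": {"B", "C"}, "s3": {"C", "D", "A"}}
print(peel_order(quilt))                   # ['s2', 's1', 's3']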

The prototype software allows users to sketch out a design and the underlying hypergraph-based algorithm determines what paper foundation patterns could make it possible – if any. Many designs result in multiple pattern options and users can adjust their sketch until they get a pattern they like. The researchers hope to make a version of their software publicly available this summer.

“I didn’t expect to be writing my computer science dissertation on quilting when I started,” said Leake. “But I found this really rich space of problems involving design and computation and traditional crafts, so there have been lots of different pieces we’ve been able to pull off and examine in that space.”

###

Researchers from University of California, Berkeley and Cornell University are co-authors of this paper. Agrawala is also an affiliate of the Institute for Human-Centered Artificial Intelligence (HAI).

An abstract for the paper “A Mathematical Foundation for Foundation Paper Pieceable Quilts” by Mackenzie Leake, Gilbert Bernstein, Abe Davis and Maneesh Agrawala can be found here along with links to a PDF of the full paper and video on YouTube.

Afterthought: I noticed that all of the co-authors for the May 2021 paper are from the University of Toronto and that most of them, including Mackenzie Leake, are associated with that university’s Chatham Labs.