Monthly Archives: August 2018

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), this is a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots): the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware, which may or may not include an AI system, that does not mimic a human or other biological organism closely enough that you might, under some circumstances, mistake it for one.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety. For anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software-only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen the designs are as good, and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this is all about algorithms; no machinery with independent movement (robot hardware) needed.
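
For readers curious about what “training the pix2pix neural net” involves, here’s a minimal sketch of the idea in PyTorch. To be clear, this is not Barat’s code, and the tiny networks and random stand-in tensors are placeholders of my own; it only illustrates the paired generator/discriminator training loop (an adversarial loss plus an L1 reconstruction term) that pix2pix-style image-to-image translation relies on.

```python
# Minimal pix2pix-style training sketch (illustrative only, not Robbie Barat's code).
# Real pix2pix uses a U-Net generator and a PatchGAN discriminator trained on
# paired images; here tiny conv nets and random tensors stand in for both.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

# Toy generator: maps a 3-channel "input" image to a 3-channel "output" image.
G = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
).to(device)

# Toy discriminator: judges (input, output) pairs, so it sees 6 channels.
D = nn.Sequential(
    nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 1, 3, stride=2, padding=1),  # PatchGAN-style score map
).to(device)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

for step in range(100):                            # stand-in for epochs over a lookbook dataset
    x = torch.rand(4, 3, 64, 64, device=device)    # placeholder "input" images
    y = torch.rand(4, 3, 64, 64, device=device)    # placeholder "real outfit" targets

    # --- discriminator step: real pairs vs. generated pairs ---
    fake = G(x).detach()
    d_real = D(torch.cat([x, y], dim=1))
    d_fake = D(torch.cat([x, fake], dim=1))
    loss_D = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- generator step: fool D while staying close to the target (L1 term) ---
    fake = G(x)
    d_fake = D(torch.cat([x, fake], dim=1))
    loss_G = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, y)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```

The ‘creepy’ faces Schwab mentions are typical of what adversarial models like this tend to produce when trained on relatively small, varied image sets at low resolution.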

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have been featured here many, many times before. The most recent mention is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
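
Kawahara’s three qualities (timing, lexical form and prosody) map naturally onto a small supervised-learning problem. The press release doesn’t say which models the team used, so the sketch below is only a guess at the general shape of a backchannel predictor: a classifier, trained on a dialogue corpus, that decides from pause and pitch features whether ERICA should respond now and, if so, whether to say “uh-huh” or repeat the speaker’s last word. The feature set, thresholds and training examples are invented for illustration, and response prosody (the “how”) is left out entirely.

```python
# Illustrative backchannel-prediction sketch (not the Kyoto/Osaka/ATR system).
# Features such as pause length and pitch slope are invented placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: [pause_seconds, pitch_slope] at the end of a user utterance,
# labelled 1 if a human listener produced a backchannel ("uh-huh"), else 0.
X = np.array([[0.8, -0.5], [1.0, -0.2], [0.2, 0.4], [0.1, 0.1],
              [0.9, -0.4], [0.3, 0.3], [1.2, -0.6], [0.15, 0.2]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

timing_model = LogisticRegression().fit(X, y)   # "timing": should ERICA respond now?

def choose_backchannel(pause_s, pitch_slope, last_word):
    """Lexical form: pick what to say once timing says respond."""
    if timing_model.predict([[pause_s, pitch_slope]])[0] == 0:
        return None                      # keep listening
    if pause_s > 1.0:
        return last_word + "?"           # partial repeat, invites elaboration
    return "uh-huh"                      # minimal acknowledgement

print(choose_backchannel(0.9, -0.4, "hospital"))   # -> "uh-huh"
print(choose_backchannel(1.2, -0.6, "hospital"))   # -> "hospital?"
```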

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser’s are safer from automation than those in, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt better enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone attempt to get a feeling for where all this might be headed. When you add the fact that the terms robots/artificial intelligence are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.
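
If the definitions still feel abstract, here’s machine learning in miniature (my own toy example, not from Wong’s posting): no one writes an explicit rule for telling the two classes of data points apart; the model adjusts its own parameters from the examples it is shown.

```python
# Machine learning in miniature: the model "teaches itself" a decision rule
# from examples rather than being programmed with the rule explicitly.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=4, random_state=0)  # toy data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)   # parameters tuned from the data
print("accuracy on unseen examples:", model.score(X_test, y_test))

# Deep learning swaps this simple model for a many-layered neural network,
# whose structure is loosely inspired by interconnected brain cells.
```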

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed, although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week, which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) the Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy: How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Sexbots, sexbot ethics, families, and marriage

Setting the stage

Can we? Should we? Is this really a good idea? I believe those ships have sailed where sexbots are concerned since the issue is no longer whether we can or should but rather what to do now that we have them. My Oct. 17, 2017 posting, ‘Robots in Vancouver and in Canada (one of two)’, features Harmony, the first (I believe) commercial AI (artificial intelligence)-enhanced sex robot in the US. They were getting ready to start shipping the bot either for Christmas 2017 or in early 2018.

Ethical quandaries?

Things have moved a little more quickly than I would have expected had I thought ahead. An April 5, 2018 essay (h/t phys.org) by Victoria Brooks, lecturer in law at the University of Westminster (UK), for The Conversation lays out some of the ethical issues (Note: Links have been removed),

Late in 2017 at a tech fair in Austria, a sex robot was reportedly “molested” repeatedly and left in a “filthy” state. The robot, named Samantha, received a barrage of male attention, which resulted in her sustaining two broken fingers. This incident confirms worries that the possibility of fully functioning sex robots raises both tantalising possibilities for human desire (by mirroring human/sex-worker relationships), as well as serious ethical questions.

So what should be done? The campaign to “ban” sex robots, as the computer scientist Kate Devlin has argued, is only likely to lead to a lack of discussion. Instead, she hypothesises that many ways of sexual and social inclusivity could be explored as a result of human-robot relationships.

To be sure, there are certain elements of relationships between humans and sex workers that we may not wish to repeat. But to me, it is the ethical aspects of the way we think about human-robot desire that are particularly key.

Why? Because we do not even agree yet on what sex is. Sex can mean lots of different things for different bodies – and the types of joys and sufferings associated with it are radically different for each individual body. We are only just beginning to understand and know these stories. But with Europe’s first sex robot brothel open in Barcelona and the building of “Harmony”, a talking sex robot in California, it is clear that humans are already contemplating imposing our barely understood sexual ethic upon machines.

I think that most of us will experience some discomfort on hearing Samantha’s story. And it’s important that, just because she’s a machine, we do not let ourselves “off the hook” by making her yet another victim and heroine who survived an encounter, only for it to be repeated. Yes, she is a machine, but does this mean it is justifiable to act destructively towards her? Surely the fact that she is in a human form makes her a surface on which human sexuality is projected, and symbolic of a futuristic human sexuality. If this is the case, then Samatha’s [sic] case is especially sad.

It is Devlin who has asked the crucial question: whether sex robots will have rights. “Should we build in the idea of consent,” she asks? In legal terms, this would mean having to recognise the robot as human – such is the limitation of a law made by and for humans.

Suffering is a way of knowing that you, as a body, have come out on the “wrong” side of an ethical dilemma. [emphasis mine] This idea of an “embodied” ethic understood through suffering has been developed on the basis of the work of the famous philosopher Spinoza and is of particular use for legal thinkers. It is useful as it allows us to judge rightness by virtue of the real and personal experience of the body itself, rather than judging by virtue of what we “think” is right in connection with what we assume to be true about their identity.

This helps us with Samantha’s case, since it tells us that in accordance with human desire, it is clear she would not have wanted what she got. The contact Samantha received was distinctly human in the sense that this case mirrors some of the most violent sexual offences cases. While human concepts such as “law” and “ethics” are flawed, we know we don’t want to make others suffer. We are making these robot lovers in our image and we ought not pick and choose whether to be kind to our sexual partners, even when we choose to have relationships outside of the “norm”, or with beings that have a supposedly limited consciousness, or even no (humanly detectable) consciousness.

Brooks makes many interesting points, not all of them in the excerpts seen here, but one question not raised in the essay is whether or not the bot itself suffered. It’s a point that I imagine proponents of ‘treating your sex bot however you like’ are certain to raise. It’s also a question Canadians may need to answer sooner rather than later now that a ‘sex doll brothel’ is about to open in Toronto. However, before getting to that news bit, there’s an interview with a man, his sexbot, and his wife.

The sexbot at home

In fact, I have two interviews; the first, included here, was with CBC (Canadian Broadcasting Corporation) Radio and originally aired October 29, 2017. Here’s a part of the transcript (Note: A link has been removed),

“She’s [Samantha] quite an elegant kind of girl,” says Arran Lee Squire, who is sales director for the company that makes her and also owns one himself.

And unlike other dolls like her, she’ll resist sex if she isn’t in the mood.

“If you touch her, say, on her sensitive spots on the breasts, for example, straight away, and you don’t touch her hands or kiss her, she might say, ‘Oh, I’m not ready for that,'” Arran says.

He says she’ll even synchronize her orgasm to the user’s.

But Arran emphasized that her functions go beyond the bedroom.

Samantha has a “family mode,” in which she can talk about science, animals and philosophy. She’ll give you motivational quotes if you’re feeling down.

At Arran’s house, Samantha interacts with his two kids. And when they’ve gone to bed, she’ll have sex with him, but only with his wife involved.

There’s also this Sept. 12, 2017 ITV This Morning with Phillip & Holly broadcast interview (running time: 6 mins. 19 secs.),

I can imagine that if I were a child in that household I’d be tempted to put the sexbot into ‘sexy mode’, preferably unsupervised by my parents. Also, will the parents be using it, at some point, for sex education?

Canadian perspective 1: Sure, it could be good for your marriage

Prior to the potential sex doll brothel in Toronto (more about that coming up), there was a flurry of interest in Marina Adshade’s contribution to the book, Robot Sex: Social and Ethical Implications, from an April 18, 2018 news item on The Tyee,

Sex robots may soon be a reality. However, little research has been done on the social, philosophical, moral and legal implications of robots specifically designed for sexual gratification.

In a chapter written for the book Robot Sex: Social and Ethical Implications, Marina Adshade, professor in the Vancouver School of Economics at the University of British Columbia, argues that sex robots could improve marriage by making it less about sex and more about love.

In this Q&A, Adshade discusses her predictions.

Could sex robots really be a viable replacement for marriage with a human? Can you love a robot?

I don’t see sex robots as substitutes for human companionship but rather as complements to human companionship. Just because we might enjoy the company of robots doesn’t mean that we cannot also enjoy the company of humans, or that having robots won’t enhance our relationships with humans. I see them as very different things — just as one woman (or one man) is not a perfect substitute for another woman (or man).

Is there a need for modern marriage to improve?

We have become increasingly demanding in what we want from the people that we marry. There was a time when women were happy to have a husband that supported the family and men were happy to have a caring mother to his children. Today we still want those things, but we also want so much more — we want lasting sexual compatibility, intense romance, and someone who is an amazing co-parent. That is a lot to ask of one person. …

Adshade adapted part of her text “Sexbot-Induced Social Change: An Economic Perspective” in Robot Sex: Social and Ethical Implications edited by John Danaher and Neil McArthur for an August 14, 2018 essay on Slate.com,

Technological change invariably brings social change. We know this to be true, but rarely can we make accurate predictions about how social behavior will evolve when new technologies are introduced. …we should expect that the proliferation of robots designed specifically for human sexual gratification means that sexbot-induced social change is on the horizon.

Some elements of that social change might be easier to anticipate than others. For example, the share of the young adult population that chooses to remain single (with their sexual needs met by robots) is very likely to increase. Because social change is organic, however, adaptations in other social norms and behaviors are much more difficult to predict. But this is not virgin territory [I suspect this was an unintended pun]. New technologies completely transformed sexual behavior and marital norms over the second half of the 20th century. Although getting any of these predictions right will surely involve some luck, we have decades of technology-induced social change to guide our predictions about the future of a world confronted with wholesale access to sexbots.

The reality is that marriage has always evolved alongside changes in technology. Between the mid-1700s and the early 2000s, the role of marriage between a man and a woman was predominately to encourage the efficient production of market goods and services (by men) and household goods and services (by women), since the social capacity to earn a wage was almost always higher for husbands than it was for wives. But starting as early as the end of the 19th century, marriage began to evolve as electrification in the home made women’s work less time-consuming, and new technologies in the workplace started to decrease the gender wage gap. Between 1890 and 1940, the share of married women working in the labor force tripled, and over the course of the century, that share continued to grow as new technologies arrived that replaced the labor of women in the home. By the early 1970s, the arrival of microwave ovens and frozen foods meant that a family could easily be fed at the end of a long workday, even when the mother worked outside of the home.

There are those who argue that men only “assume the burden” of marriage because marriage allows men easy sexual access, and that if men can find sex elsewhere they won’t marry. We hear this prediction now being made in reference to sexbots, but the same argument was given a century ago when the invention of the latex condom (1912) and the intrauterine device (1909) significantly increased people’s freedom to have sex without risking pregnancy and (importantly, in an era in which syphilis was rampant) sexually transmitted disease. Cosmopolitan magazine ran a piece at the time by John B. Watson that asked the blunt question, will men marry 50 years from now? Watson’s answer was a resounding no, writing that “we don’t want helpmates anymore, we want playmates.” Social commentators warned that birth control technologies would destroy marriage by removing the incentives women had to remain chaste and encourage them to flood the market with nonmarital sex. Men would have no incentive to marry, and women, whose only asset is sexual access, would be left destitute.

Fascinating, non? Should you be interested, “Sexbot-Induced Social Change: An Economic Perspective” by Marina Adshade can be found in Robot Sex: Social and Ethical Implications (link to Amazon) edited by John Danaher and Neil McArthur. © 2017 by the Massachusetts Institute of Technology, reprinted courtesy of the MIT Press.

Canadian perspective 2: What is a sex doll brothel doing in Toronto?

Sometimes known as Toronto the Good (although not recently; find out more about Toronto and its nicknames here) and once a byword for stodginess, the city is about to welcome a sex doll brothel according to an August 28, 2018 CBC Radio news item by Katie Geleff and John McGill,

On their website, Aura Dolls claims to be, “North America’s first known brothel that offers sexual services with the world’s most beautiful silicone ladies.”

Nestled between a massage parlour, nail salon and dry cleaner, Aura Dolls is slated to open on Sept. 8 [2018] in an otherwise nondescript plaza in Toronto’s north end.

The company plans to operate 24 hours a day, seven days a week, and will offer customers six different silicone dolls. The website describes the life-like dolls as, “classy, sophisticated, and adventurous ladies.” …

They add that, “the dolls are thoroughly sanitized to meet your expectations.” But that condoms are still “highly recommended.”

Toronto city councillor John Filion says people in his community are concerned about the proposed business.

Filion spoke to As It Happens guest host Helen Mann. Here is part of their conversation.

Councillor Filion, Aura Dolls is urging people to have “an open mind” about their business plan. Would you say that you have one?

Well, I have an open mind about what sort of behaviours people want to do, as long as they don’t harm anybody else. It’s a totally different matter once you bring that out to the public. So I think I have a fairly closed mind about where people should be having sex with [silicone] dolls.

So, what’s wrong with a sex doll brothel?

It’s where it is located, for one thing. Where it’s being proposed happens to be near an intersection where about 25,000 people live, all kinds of families, four elementary schools are very near by. And you know, people shouldn’t really need to be out on a walk with their families and try to explain to their kids why someone is having sex with a [silicone] doll.

But Aura Dolls says that they are going to be doing this very discreetly, that they won’t have explicit signage, and that they therefore won’t be bothering anyone.

They’ve hardly been discreet. They were putting illegal posters all over the neighbourhood. They’ve probably had a couple of hundred of thousands of dollars of free publicity already. I don’t think there’s anything at all discreet about what they are doing. They’re trying to be indiscreet to drum up business.

Can you be sure that there aren’t constituents in your area that think this is a great idea?

I can’t be sure that there aren’t some people who might think, “Oh great, it’s just down the street from me. Let me go there.” I would say that might be a fraction of one per cent of my constituents. Most people are appalled by this.

And it’s not a narrow-minded neighbourhood. Whatever somebody does in their home, I don’t think we’re going to pass moral judgment on it, again, as long as it’s not harming anyone else. But this is just kind of scuzzy. …

….

Aura Dolls says that it’s doing nothing illegal. They say that they are being very clear that the dolls they are using represent adult women and that they are actually providing a service. Do you agree that they are doing this legally?

No, they’re not at all legal. It’s an illegal use. And if there’s any confusion about that, they will be getting a letter from the city very soon. It is clearly not a legal use. It’s not permitted under the zoning bylaw and it fits the definition of adult entertainment parlour, for which you require a license — and they certainly would not get one. They would not get a license in this neighbourhood because it’s not a permitted use.

The audio portion runs for 5 mins. 31 secs.

I believe these dolls are in fact sexbots, likely enhanced with AI. An August 29, 2018 article by Karlton Jahmal for hotnewhiphop.com describes the dolls as ‘fembots’ and provides more detail (Note: Links have been removed),

Toronto has seen the future, and apparently, it has to do with sex dolls. The Six [another Toronto nickname] is about to get blessed with the first legal sex doll brothel, and the fembots look too good to be true. If you head over to Aura Dolls website, detailed biographies for the six available sex dolls are on full display. You can check out the doll’s height, physical dimensions, heritage and more.

Aura plans to introduce more dolls in the future, according to a statement in the Toronto Star by Claire Lee, a representative for the company. At the moment, the ethnicities of the sex dolls feature Japanese, Caucasian American, French Canadian, Irish Canadian, Colombian, and Korean girls. Male dolls will be added in the near future. The sex dolls look remarkably realistic. Aura’s website writes, “Our dolls are made from the highest quality of TPE silicone which mimics the feeling of natural human skin, pores, texture and movement giving the user a virtually identical experience as being with a real partner.”

There are a few more details about the proposed brothel and more comments from Toronto city councillor John Filion in an August 28, 2018 article by Claire Floody and Jenna Moon with Alexandra Jones and Melanie Green for thestar.com,

Toronto will soon be home to North America’s [this should include Canada, US, and Mexico] first known sex doll brothel, offering sexual services with six silicone-made dolls.

According to the website for Aura Dolls, the company behind the brothel, the vision is to bring a new way to achieve sexual needs “without the many restrictions and limitations that a real partner may come with.”

The brothel is expected to open in a shopping plaza on Yonge St., south of Sheppard Ave., on Sept. 8 [2018]. The company doesn’t give the exact location on its website, stating it’s announced upon booking.

Spending half an hour with one doll costs $80, with two dolls running $160. For an hour, the cost is $120 with one doll. The maximum listed time is four hours for $480 per doll.

Doors at the new brothel for separate entry and exit will be used to ensure “maximum privacy for customers.” While the business does plan on having staff on-site, they “should not have any interaction,” Lee said.

“The reason why we do that is to make sure that everyone feels comfortable coming in and exiting,” she said, noting that people may feel shy or awkward about visiting the site.

… Lee said that the business is operating within the law. “The only law stating with anything to do with the dolls is that it has to meet a height requirement. It can’t resemble a child,” she said. …

Councillor John Filion, Ward 23 Willowdale, said his staff will be “throwing the book at (Aura Dolls) for everything they can.”

“I’ve still got people studying to see what’s legal and what isn’t,” Filion said. He noted that a bylaw introduced in North York in the ’90s prevents retail sex shops operating outside of industrial areas. Filion said his office is still confirming that the bylaw is active following harmonization, which condensed the six boroughs’ bylaws after amalgamation in 1998.

“If the bylaw that I brought in 20 years ago still exists, it would prohibit this,” Filion said.

“There’s legal issues,” he said, suggesting that people interested in using the sex dolls might consider doing so at home, rather than at a brothel.

The councillor said he’s received complaints from constituents about the business. “The phone’s ringing off the hook today,” Filion said.

It should be an interesting first week at school for everyone involved. I wonder what Ontario Premier Doug Ford, who recently rolled back the sex education curriculum for the province by 20 years, will make of these developments.

As for sexbots/fembots/sex dolls or whatever you want to call them, they are here and it’s about time Canadians had a frank discussion on the matter. Also, I’ve been waiting for quite some time for any mention of male sexbots (malebots?). Personally, I don’t think we’ll be seeing male sexbots appear in either brothels or homes anytime soon.

Create gold nanoparticles and nanowires with water droplets

For some reason it took a lot longer than usual to find this research paper despite having the journal (Nature Communications), the title (Spontaneous formation …), and the authors’ names. Thankfully, success was wrested from the jaws of defeat (I don’t care if that is trite; it’s how I felt) and links, etc. follow at the end as usual.

An April 19, 2018 Stanford University news release (also on EurekAlert) spins a fascinating tale,

An experiment that, by design, was not supposed to turn up anything of note instead produced a “bewildering” surprise, according to the Stanford scientists who made the discovery: a new way of creating gold nanoparticles and nanowires using water droplets.

The technique, detailed April 19 [2018] in the journal Nature Communications, is the latest discovery in the new field of on-droplet chemistry and could lead to more environmentally friendly ways to produce nanoparticles of gold and other metals, said study leader Richard Zare, a chemist in the School of Humanities and Sciences and a co-founder of Stanford Bio-X.

“Being able to do reactions in water means you don’t have to worry about contamination. It’s green chemistry,” said Zare, who is the Marguerite Blake Wilbur Professor in Natural Science at Stanford.

Noble metal

Gold is known as a noble metal because it is relatively unreactive. Unlike base metals such as nickel and copper, gold is resistant to corrosion and oxidation, which is one reason it is such a popular metal for jewelry.

Around the mid-1980s, however, scientists discovered that gold’s chemical aloofness only manifests at large, or macroscopic, scales. At the nanometer scale, gold particles are very chemically reactive and make excellent catalysts. Today, gold nanostructures have found a role in a wide variety of applications, including bio-imaging, drug delivery, toxic gas detection and biosensors.

Until now, however, the only reliable way to make gold nanoparticles was to combine the gold precursor chloroauric acid with a reducing agent such as sodium borohydride.

The reaction transfers electrons from the reducing agent to the chloroauric acid, liberating gold atoms in the process. Depending on how the gold atoms then clump together, they can form nano-size beads, wires, rods, prisms and more.

A spritz of gold

Recently, Zare and his colleagues wondered whether this gold-producing reaction would proceed any differently with tiny, micron-size droplets of chloroauric acid and sodium borohydride. How large is a microdroplet? “It is like squeezing a perfume bottle and out spritzes a mist of microdroplets,” Zare said.

From previous experiments, the scientists knew that some chemical reactions proceed much faster in microdroplets than in larger solution volumes.

Indeed, the team observed that gold nanoparticles grew over 100,000 times faster in microdroplets. However, the most striking observation came while running a control experiment in which they replaced the reducing agent – which ordinarily releases the gold particles – with microdroplets of water.

“Much to our bewilderment, we found that gold nanostructures could be made without any added reducing agents,” said study first author Jae Kyoo Lee, a research associate.

Viewed under an electron microscope, the gold nanoparticles and nanowires appear fused together like berry clusters on a branch.

The surprise finding means that pure water microdroplets can serve as microreactors for the production of gold nanostructures. “This is yet more evidence that reactions in water droplets can be fundamentally different from those in bulk water,” said study coauthor Devleena Samanta, a former graduate student in Zare’s lab and co-author on the paper.

If the process can be scaled up, it could eliminate the need for potentially toxic reducing agents that have harmful health side effects or that can pollute waterways, Zare said.

It’s still unclear why water microdroplets are able to replace a reducing agent in this reaction. One possibility is that transforming the water into microdroplets greatly increases its surface area, creating the opportunity for a strong electric field to form at the air-water interface, which may promote the formation of gold nanoparticles and nanowires.

“The surface area atop a one-liter beaker of water is less than one square meter. But if you turn the water in that beaker into microdroplets, you will get about 3,000 square meters of surface area – about the size of half a football field,” Zare said.

The team is exploring ways to utilize the nanostructures for various catalytic and biomedical applications and to refine their technique to create gold films.

“We observed a network of nanowires that may allow the formation of a thin layer of nanowires,” Samanta said.
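
As an aside, Zare’s surface-area comparison is easy to check: splitting a volume V of water into droplets of radius r gives a total surface area of 3V/r, so micron-scale droplets turn one litre into roughly 3,000 square metres of air-water interface. A quick back-of-the-envelope sketch (the one-micrometre radius is my assumption, not a figure from the paper):

```python
# Back-of-the-envelope check of the surface-area claim in the news release.
# Splitting volume V into droplets of radius r gives N = V / (4/3 * pi * r^3)
# droplets with total area N * 4 * pi * r^2 = 3V / r.
import math

V = 1e-3          # one litre of water, in cubic metres
r = 1e-6          # assume ~1 micrometre droplet radius ("micron-size")

n_droplets = V / ((4.0 / 3.0) * math.pi * r**3)
total_area = n_droplets * 4.0 * math.pi * r**2    # equals 3 * V / r

print(f"{n_droplets:.2e} droplets, total surface area ~ {total_area:.0f} m^2")
# -> roughly 3,000 m^2, consistent with Zare's "half a football field" figure
```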

Here’s a link and a citation for the paper,

Spontaneous formation of gold nanostructures in aqueous microdroplets by Jae Kyoo Lee, Devleena Samanta, Hong Gil Nam, & Richard N. Zare. Nature Communications, volume 9, Article number: 1562 (2018) doi:10.1038/s41467-018-04023-z Published online: 19 April 2018

Unsurprisingly, given Zare’s history as recounted in the news release, this paper is open access.

When nanoparticles collide

The science of collisions at the nanoscale (although it looks more like kissing to me) could lead to some helpful discoveries, according to an April 5, 2018 news item on Nanowerk,

Helmets that do a better job of preventing concussions and other brain injuries. Earphones that protect people from damaging noises. Devices that convert “junk” energy from airport runway vibrations into usable power.

New research on the events that occur when tiny specks of matter called nanoparticles smash into each other could one day inform the development of such technologies.

Before getting to the news release proper, here’s a gif released by the university,

A digital reconstruction shows how individual atoms in two largely spherical nanoparticles react when the nanoparticles collide in a vacuum. In the reconstruction, the atoms turn blue when they are in contact with the opposing nanoparticle. Credit: Yoichi Takato

An April 4, 2018 University at Buffalo news release (also on EurekAlert) by Charlotte Hsu, which originated the news item, fills in some details,

Using supercomputers, scientists led by the University at Buffalo modeled what happens when two nanoparticles collide in a vacuum. The team ran simulations for nanoparticles with three different surface geometries: those that are largely circular (with smooth exteriors); those with crystal facets; and those that possess sharp edges.

“Our goal was to lay out the forces that control energy transport at the nanoscale,” says study co-author Surajit Sen, PhD, professor of physics in UB’s College of Arts and Sciences. “When you have a tiny particle that’s 10, 20 or 50 atoms across, does it still behave the same way as larger particles, or grains? That’s the guts of the question we asked.”

“The guts of the answer,” Sen adds, “is yes and no.”

“Our research is useful because it builds the foundation for designing materials that either transmit or absorb energy in desired ways,” says first author Yoichi Takato, PhD. Takato, a physicist at AGC Asahi Glass and former postdoctoral scholar at the Okinawa Institute of Science and Technology in Japan, completed much of the study as a doctoral candidate in physics at UB. “For example, you could potentially make an ultrathin material that is energy absorbent. You could imagine that this would be practical for use in helmets and head gear that can help to prevent head and combat injuries.”

The study was published on March 21 in Proceedings of the Royal Society A by Takato, Sen and Michael E. Benson, who completed his portion of the work as an undergraduate physics student at UB. The scientists ran their simulations at the Center for Computational Research, UB’s academic supercomputing facility.

What happens when nanoparticles crash

The new research focused on small nanoparticles — those with diameters of 5 to 15 nanometers. The scientists found that in collisions, particles of this size behave differently depending on their shape.

For example, nanoparticles with crystal facets transfer energy well when they crash into each other, making them an ideal component of materials designed to harvest energy. When it comes to energy transport, these particles adhere to scientific norms that govern macroscopic linear systems — including chains of equal-sized masses with springs in between them — that are visible to the naked eye.

In contrast, nanoparticles that are rounder in shape, with amorphous surfaces, adhere to nonlinear force laws. This, in turn, means they may be especially useful for shock mitigation. When two spherical nanoparticles collide, energy dissipates around the initial point of contact on each one instead of propagating all the way through both. The scientists report that at crash velocities of about 30 meters per second, atoms within each particle shift only near the initial point of contact.

Nanoparticles with sharp edges are less predictable: According to the new study, their behavior varies depending on sharpness of the edges when it comes to transporting energy.

Designing a new generation of materials

“From a very broad perspective, the kind of work we’re doing has very exciting prospects,” Sen says. “It gives engineers fundamental information about nanoparticles that they didn’t have before. If you’re designing a new type of nanoparticle, you can now think about doing it in a way that takes into account what happens when you have very small nanoparticles interacting with each other.”

Though many scientists are working with nanotechnology, the way the tiniest of nanoparticles behave when they crash into each other is largely an open question, Takato says.

“When you’re designing a material, what size do you want the nanoparticle to be? How will you lay out the particles within the material? How compact do you want it to be? Our study can inform these decisions,” Takato says.
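
Neither the news release nor this post includes the simulation details, but the general flavour of such molecular-dynamics studies can be sketched in a few dozen lines: build two small clusters of atoms, give them opposing velocities, integrate Newton’s equations with a pairwise potential, and watch how kinetic energy redistributes after contact. The Lennard-Jones potential, cluster sizes and closing speed below are stand-ins of my own, not the authors’ model:

```python
# Toy molecular-dynamics sketch of two small clusters colliding head-on.
# Reduced Lennard-Jones units; NOT the potentials or parameters from the paper.
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces and potential energy."""
    n = len(pos)
    forces = np.zeros_like(pos)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            rij = pos[i] - pos[j]
            r = np.linalg.norm(rij)
            sr6 = (sigma / r) ** 6
            energy += 4 * eps * (sr6**2 - sr6)
            fmag = 24 * eps * (2 * sr6**2 - sr6) / r**2
            forces[i] += fmag * rij
            forces[j] -= fmag * rij
    return forces, energy

def make_cluster(center, n_side=2, spacing=1.12):
    """A tiny cubic cluster of atoms around a centre point."""
    grid = np.arange(n_side) * spacing
    pts = np.array([[x, y, z] for x in grid for y in grid for z in grid], float)
    return pts - pts.mean(axis=0) + np.asarray(center, float)

# Two 8-atom clusters approaching each other along x.
pos = np.vstack([make_cluster([-4.0, 0, 0]), make_cluster([4.0, 0, 0])])
vel = np.zeros_like(pos)
vel[:8, 0] = 0.5      # left cluster moves right
vel[8:, 0] = -0.5     # right cluster moves left

dt, mass = 0.005, 1.0
forces, _ = lj_forces(pos)
for step in range(4000):                      # velocity-Verlet integration
    pos += vel * dt + 0.5 * forces / mass * dt**2
    new_forces, _ = lj_forces(pos)
    vel += 0.5 * (forces + new_forces) / mass * dt
    forces = new_forces

# Compare kinetic energy of the two original groups of atoms after the run.
ke = 0.5 * mass * (vel**2).sum(axis=1)
print("post-collision kinetic energy: left group %.3f, right group %.3f"
      % (ke[:8].sum(), ke[8:].sum()))
```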

Here’s a link to and a citation for the paper,

Small nanoparticles, surface geometry and contact forces by Yoichi Takato, Michael E. Benson, Surajit Sen. Proceedings of the Royal Society A (Mathematical, Physical, and Engineering Sciences) Published 21 March 2018.DOI: 10.1098/rspa.2017.0723

This paper is behind a paywall.

An artificial enzyme uses light to kill bacteria

An April 4, 2018 news item on ScienceDaily announces a light-based approach to killing bacteria,

Researchers from RMIT University [Australia] have developed a new artificial enzyme that uses light to kill bacteria.

The artificial enzymes could one day be used in the fight against infections, and to keep high-risk public spaces like hospitals free of bacteria like E. coli and Golden Staph.

E. coli can cause dysentery and gastroenteritis, while Golden Staph is the major cause of hospital-acquired secondary infections and chronic wound infections.

Made from tiny nanorods — 1000 times smaller than the thickness of the human hair — the “NanoZymes” use visible light to create highly reactive oxygen species that rapidly break down and kill bacteria.

Lead researcher, Professor Vipul Bansal who is an Australian Future Fellow and Director of RMIT’s Sir Ian Potter NanoBioSensing Facility, said the new NanoZymes offer a major cutting edge over nature’s ability to kill bacteria.

Dead bacteria made beautiful,

Caption: A 3-D rendering of dead bacteria after it has come into contact with the NanoZymes.
Credit: Dr. Chaitali Dekiwadia/ RMIT Microscopy and Microanalysis Facility

An April 5, 2018 RMIT University press release (also on EurekAlert but dated April 4, 2018), which originated the news item, expands on the theme,

“For a number of years we have been attempting to develop artificial enzymes that can fight bacteria, while also offering opportunities to control bacterial infections using external ‘triggers’ and ‘stimuli’,” Bansal said. “Now we have finally cracked it.

“Our NanoZymes are artificial enzymes that combine light with moisture to cause a biochemical reaction that produces OH radicals and breaks down bacteria. Nature’s antibacterial activity does not respond to external triggers such as light.

“We have shown that when shined upon with a flash of white light, the activity of our NanoZymes increases by over 20 times, forming holes in bacterial cells and killing them efficiently.

“This next generation of nanomaterials are likely to offer new opportunities in bacteria free surfaces and controlling spread of infections in public hospitals.”

The NanoZymes work in a solution that mimics the fluid in a wound. This solution could be sprayed onto surfaces.

The NanoZymes are also produced as powders to mix with paints, ceramics and other consumer products. This could mean bacteria-free walls and surfaces in hospitals.

Public toilets — places with high levels of bacteria, and in particular E. coli — are also a prime location for the NanoZymes, and the researchers believe their new technology may even have the potential to create self-cleaning toilet bowls.

While the NanoZymes currently use visible light from torches or similar light sources, in the future they could be activated by sunlight.

The researchers have shown that the NanoZymes work in a lab environment. The team is now evaluating the long-term performance of the NanoZymes in consumer products.

“The next step will be to validate the bacteria killing and wound healing ability of these NanoZymes outside of the lab,” Bansal said.

“This NanoZyme technology has huge potential, and we are seeking interest from appropriate industries for joint product development.”

Here’s a link to and a citation for the paper,

Visible-Light-Triggered Reactive-Oxygen-Species-Mediated Antibacterial Activity of Peroxidase-Mimic CuO Nanorods by Md. Nurul Karim, Mandeep Singh, Pabudi Weerathunge, Pengju Bian, Rongkun Zheng, Chaitali Dekiwadia, Taimur Ahmed, Sumeet Walia, Enrico Della Gaspera, Sanjay Singh, Rajesh Ramanathan, and Vipul Bansal. ACS Appl. Nano Mater., Article ASAP DOI: 10.1021/acsanm.8b00153 Publication Date (Web): March 6, 2018

Copyright © 2018 American Chemical Society

This paper is open access.

D-Wave and the first large-scale quantum simulation of a* topological state of matter

This is all about a local (Burnaby is one of the metro Vancouver municipalities) quantum computing company, D-Wave Systems. The company has been featured here from time to time, usually for its quantum technology (it is considered a technology star in local and [I think] other circles), but my March 9, 2018 posting about the SXSW (South by Southwest) festival noted that Bo Ewald, President, D-Wave Systems US, was a member of the ‘Quantum Computing: Science Fiction to Science Fact’ panel.

Now, they’re back making technology announcements like this August 22, 2018 news item on phys.org (Note: Links have been removed),

D-Wave Systems today [August 22, 2018] published a milestone study demonstrating a topological phase transition using its 2048-qubit annealing quantum computer. This complex quantum simulation of materials is a major step toward reducing the need for time-consuming and expensive physical research and development.

The paper, entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits”, was published in the peer-reviewed journal Nature. This work marks an important advancement in the field and demonstrates again that the fully programmable D-Wave quantum computer can be used as an accurate simulator of quantum systems at a large scale. The methods used in this work could have broad implications in the development of novel materials, realizing Richard Feynman’s original vision of a quantum simulator. This new research comes on the heels of D-Wave’s recent Science paper demonstrating a different type of phase transition in a quantum spin-glass simulation. The two papers together signify the flexibility and versatility of the D-Wave quantum computer in quantum simulation of materials, in addition to other tasks such as optimization and machine learning.

An August 22, 2018 D-Wave Systems news release (also on EurekAlert), which originated the news item, delves further (Note: A link has been removed),

In the early 1970s, theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless predicted a new state of matter characterized by nontrivial topological properties. The work was awarded the Nobel Prize in Physics in 2016. D-Wave researchers demonstrated this phenomenon by programming the D-Wave 2000Q™ system to form a two-dimensional frustrated lattice of artificial spins. The observed topological properties in the simulated system cannot exist without quantum effects and closely agree with theoretical predictions.

“This paper represents a breakthrough in the simulation of physical systems which are otherwise essentially impossible,” said 2016 Nobel laureate Dr. J. Michael Kosterlitz. “The test reproduces most of the expected results, which is a remarkable achievement. This gives hope that future quantum simulators will be able to explore more complex and poorly understood systems so that one can trust the simulation results in quantitative detail as a model of a physical system. I look forward to seeing future applications of this simulation method.”

“The work described in the Nature paper represents a landmark in the field of quantum computation: for the first time, a theoretically predicted state of matter was realized in quantum simulation before being demonstrated in a real magnetic material,” said Dr. Mohammad Amin, chief scientist at D-Wave. “This is a significant step toward reaching the goal of quantum simulation, enabling the study of material properties before making them in the lab, a process that today can be very costly and time consuming.”

“Successfully demonstrating physics of Nobel Prize-winning importance on a D-Wave quantum computer is a significant achievement in and of itself. But in combination with D-Wave’s recent quantum simulation work published in Science, this new research demonstrates the flexibility and programmability of our system to tackle recognized, difficult problems in a variety of areas,” said Vern Brownell, D-Wave CEO.

“D-Wave’s quantum simulation of the Kosterlitz-Thouless transition is an exciting and impactful result. It not only contributes to our understanding of important problems in quantum magnetism, but also demonstrates solving a computationally hard problem with a novel and efficient mapping of the spin system, requiring only a limited number of qubits and opening new possibilities for solving a broader range of applications,” said Dr. John Sarrao, principal associate director for science, technology, and engineering at Los Alamos National Laboratory.

“The ability to demonstrate two very different quantum simulations, as we reported in Science and Nature, using the same quantum processor, illustrates the programmability and flexibility of D-Wave’s quantum computer,” said Dr. Andrew King, principal investigator for this work at D-Wave. “This programmability and flexibility were two key ingredients in Richard Feynman’s original vision of a quantum simulator and open up the possibility of predicting the behavior of more complex engineered quantum systems in the future.”

The achievements presented in Nature and Science join D-Wave’s continued work with world-class customers and partners on real-world prototype applications (“proto-apps”) across a variety of fields. The 70+ proto-apps developed by customers span optimization, machine learning, quantum material science, cybersecurity, and more. Many of the proto-apps’ results show that D-Wave systems are approaching, and sometimes surpassing, conventional computing in terms of performance or solution quality on real problems, at pre-commercial scale. As the power of D-Wave systems and software expands, these proto-apps point to the potential for scaled customer application advantage on quantum computers.
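For readers who wonder what ‘programming a lattice of artificial spins’ amounts to in practice, here is a hedged, toy-scale sketch (my own illustration, not the 1,800-qubit experiment) using dimod, the open-source Python package in D-Wave’s Ocean tools. Three antiferromagnetically coupled spins form the smallest frustrated unit, and a brute-force classical solver lists the degenerate ground states; on real hardware the same problem description would be submitted to the quantum processor instead.

```python
# pip install dimod  (part of D-Wave's open-source Ocean SDK)
import dimod

# Smallest frustrated lattice: a triangle of spins with antiferromagnetic
# couplings. In dimod's Ising convention, J > 0 penalizes aligned neighbours,
# and no assignment of the three spins can satisfy all three bonds at once.
h = {}                                        # no local fields on the spins
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}   # the three couplings

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)

# Brute-force all 2**3 configurations; fine for a toy problem, whereas the
# study above programmed roughly 1,800 coupled qubits on the D-Wave 2000Q.
sampleset = dimod.ExactSolver().sample(bqm)
print(sampleset)   # the six lowest-energy rows are the degenerate ground states (energy -1.0)
```

The notable design point is that the whole ‘program’ is just the lists of h and J values; simulating a different lattice means supplying different numbers, which is what the press release means by programmability and flexibility.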

The company has prepared a video describing Richard Feynman’s proposal about quantum computing and celebrating their latest achievement,

Here’s the company’s Youtube video description,

In 1982, Richard Feynman proposed the idea of simulating the quantum physics of complex systems with a programmable quantum computer. In August 2018, his vision was realized when researchers from D-Wave Systems and the Vector Institute demonstrated the simulation of a topological phase transition—the subject of the 2016 Nobel Prize in Physics—in a fully programmable D-Wave 2000Q™ annealing quantum computer. This complex quantum simulation of materials is a major step toward reducing the need for time-consuming and expensive physical research and development.

You may want to check out the comments in response to the video.

Here’s a link to and a citation for the Nature paper,

Observation of topological phenomena in a programmable lattice of 1,800 qubits by Andrew D. King, Juan Carrasquilla, Jack Raymond, Isil Ozfidan, Evgeny Andriyash, Andrew Berkley, Mauricio Reis, Trevor Lanting, Richard Harris, Fabio Altomare, Kelly Boothby, Paul I. Bunyk, Colin Enderud, Alexandre Fréchette, Emile Hoskinson, Nicolas Ladizinsky, Travis Oh, Gabriel Poulin-Lamarre, Christopher Rich, Yuki Sato, Anatoly Yu. Smirnov, Loren J. Swenson, Mark H. Volkmann, Jed Whittaker, Jason Yao, Eric Ladizinsky, Mark W. Johnson, Jeremy Hilton, & Mohammad H. Amin. Nature volume 560, pages 456–460 (2018) DOI: https://doi.org/10.1038/s41586-018-0410-x Published 22 August 2018

This paper is behind a paywall but, for those who don’t have access, there is a synopsis here.

For anyone curious about the earlier paper published in July 2018, here’s a link and a citation,

Phase transitions in a programmable quantum spin glass simulator by R. Harris, Y. Sato, A. J. Berkley, M. Reis, F. Altomare, M. H. Amin, K. Boothby, P. Bunyk, C. Deng, C. Enderud, S. Huang, E. Hoskinson, M. W. Johnson, E. Ladizinsky, N. Ladizinsky, T. Lanting, R. Li, T. Medina, R. Molavi, R. Neufeld, T. Oh, I. Pavlov, I. Perminov, G. Poulin-Lamarre, C. Rich, A. Smirnov, L. Swenson, N. Tsai, M. Volkmann, J. Whittaker, J. Yao. Science 13 Jul 2018: Vol. 361, Issue 6398, pp. 162-165 DOI: 10.1126/science.aat2025

This paper too is behind a paywall.

You can find out more about D-Wave here.

*ETA ‘a’ to the post title on February 24, 2021.

Café Scientifique Vancouver (Canada) talk on August 28th 2018: Getting the message: What is gene expression and why does it matter?

Here’s more about the latest Café Scientifique talk from an August  22, 2018 announcement received via email,

Our next café will happen on TUESDAY, AUGUST 28TH at 7:30PM in the back room at YAGGER'S DOWNTOWN (433 W Pender [St., Vancouver]). Our speaker for the evening will be DR. KATIE MARSHALL from the Department of Zoology at UBC [University of British Columbia]. Her topic will be:

GETTING THE MESSAGE: WHAT IS GENE EXPRESSION AND WHY DOES IT MATTER?

Many of us think that DNA is like a light switch; you have a particular sequence of base pairs or a particular chromosome, and these directly cause a large change in biological functioning. But the truth is that any given gene can be up or downregulated through a dizzying array of biochemical “dimmer switches” that finely control how much that particular gene is expressed. Understanding how this works is key to answering questions like “How does a sequence of base pairs in DNA become a whole organism?” and “Why is it that every cell has the same DNA sequence but different function?”. We’ll chat about the advances in computing needed to answer these questions, the importance of gene expression in disease, and how this science can help us understand social issues better too.

I wasn’t able to find out too much more about Dr. Katie but there is this profile page on the UBC Zoology Department website,

The long-term goal of my research is to understand how abiotic stress filters through physiology to shape species abundance and distribution. While abiotic stressors such as temperature have been used very successfully to predict population growth, distribution, and diversity of insect species, integration of the mechanisms of how these stressors are experienced by individuals from alteration of physiology through to fitness impacts has lagged. Inclusion of these mechanisms is crucial for accurate modelling predictions of individual (and therefore population-level) responses. My research to date has focused on how the impact of frequency of stress (rather than the duration or intensity of stress) is a superior predictor of both survival and reproductive success, and has used insect cold tolerance as a model system.

At UBC I’ll be focusing on the cold tolerance and cryobiology of invertebrates in the intertidal. These organisms face freezing stress through the winter, yet remarkably little is known about how they do so. I’ll also be investigating plasticity in cold tolerance by looking for interactive effects of ocean acidification and community composition on thermal tolerance.

Enjoy!

Shipwrecks being brought back to life with ‘smart nanotech’

The American Chemical Society (ACS) is holding its 256th meeting from August 19 – 22, 2018 in Boston, Massachusetts, US. This August 21, 2018 news item on Nanowerk announces a ‘shipwreck’ presentation at the meeting,

Thousands of shipwrecks litter the seafloor all over the world, preserved in sediments and cold water. But when one of these ships is brought up from the depths, the wood quickly starts deteriorating. Today, scientists report a new way to use “smart” nanocomposites to conserve a 16th-century British warship, the Mary Rose, and its artifacts. The new approach could help preserve other salvaged ships by eliminating harmful acids without damaging the wooden structures themselves.

An August 21, 2018 ACS press release (also on EurekAlert), which originated the news item, delves further into the research and scientists’ after hours (?) activities,

“This project began over a glass of wine with Eleanor Schofield, Ph.D., who is head of conservation at the Mary Rose Trust,” recalls Serena Corr, Ph.D., the project’s principal investigator. “She was working on techniques to preserve the wood hull and assorted artifacts and needed a way to direct the treatment into the wood. We had been working with functional magnetic nanomaterials for applications in imaging, and we thought we might be able to apply this technology to the Mary Rose.”

The Mary Rose sank in 1545 off the south coast of England and remained under the seabed until she was salvaged in 1982, along with over 19,000 artifacts and pieces of timber. About 40 percent of the original structure survived. The ship and its artifacts give unique insights into Tudor seafaring and what it was like to live during that period. A state-of-the-art museum in Portsmouth, England, displays the ship’s hull and artifacts. A video about the ship and its artifacts can be viewed here.

While buried in the seabed, sulfur-reducing marine bacteria migrated into the wood of the Mary Rose and produced hydrogen sulfide. This gas reacted with iron ions from corroded fixtures like cannons to form iron sulfides. Although stable in low-oxygen environments, sulfur rapidly oxidizes in regular air in the presence of iron to form destructive acids. Corr’s goal was to avoid acid production by removing the free iron ions.

Once raised from the seabed, the ship was sprayed with cold water, which stopped it from drying out and prevented further microbial activity. The conservation team then sprayed the hull with different types of polyethylene glycol (PEG), a common polymer with a wide range of applications, to replace the water in the cellular structure of the wood and strengthen its outer layer.

Corr and her postdoctoral fellow Esther Rani Aluri, Ph.D., and Ph.D. candidate Enrique Sanchez at the University of Glasgow are devising a new family of tiny magnetic nanoparticles to aid in this process, in collaboration with Schofield and Rachel O’Reilly, Ph.D., at the University of Warwick. In their initial step, the team, led by Schofield, used synchrotron techniques to probe the nature of the sulfur species before turning the PEG sprays off, and then periodically as the ship dried. This was the first real-time experiment to closely examine  the evolution of oxidized sulfur and iron species. This accomplishment has informed efforts to design new targeted treatments for the removal of these harmful species from the Mary Rose wood.

The next step will be to use a nanocomposite based on core magnetic iron oxide nanoparticles that include agents on their surfaces that can remove the ions. The nanoparticles can be directly applied to the porous wood structure and guided to particular areas of the wood using external magnetic fields, a technique previously demonstrated for drug delivery. The nanocomposite will be encompassed in a heat-responsive polymer that protects the nanoparticles and provides a way to safely deliver them to and from the wood surface. A major advantage of this approach is that it allows for the complete removal of free iron and sulfate ions from the wood, and these nanocomposites can be tuned by tweaking their surfaces.

With this understanding, Corr notes, “Conservators will have, for the first time, a state-of-the-art quantitative and restorative method for the safe and rapid treatment of wooden artifacts. We plan to then transfer this technology to other materials recovered from the Mary Rose, such as textiles and leather.”

The researchers acknowledge funding from the Mary Rose Trust and the Leverhulme Trust.

There is a video about the Mary Rose produced by Agence France Presse (AFP) and published on Youtube in May 2013,

Here’s the text from AFP Mary Rose entry on Youtube,

The relics from the Mary Rose, the flagship of England’s navy when it sank in 1545 as a heartbroken king Henry VIII watched from the shore, have finally been reunited with the famous wreck in a new museum offering a view of life in Tudor times. Duration: 02:35

One more thing: Canadian shipwrecks

We don’t have a ‘Henry VIII’ story or a ‘smart nano and shipwrecks’ story, but we do have a federal agency with a service devoted to underwater archaeology. From the Parks Canada Underwater Archaeology webpage,

Underwater archaeology deals with archaeological sites found below the surface of oceans, rivers, and lakes and on the foreshore. In addition to shipwrecks, underwater archaeologists study submerged aboriginal sites such as fish weirs and middens; remains of historic structures such as wharves, canal locks, and marine railways; sunken aircraft; and other submerged cultural heritage resources.

Underwater archaeology shares the same methodology and principles as archaeology carried out on land sites. All archaeology involves the careful study of artefacts, structures and features to reconstruct and explain the lives of people in the past. However, because it is carried out in a more challenging environment, underwater archaeological fieldwork is more complex than land archaeology.

Specialized techniques and equipment are required to work productively underwater. Staying warm during long dives is a constant concern, so underwater archaeologists often use masks that cover their entire faces, dry suits worn over layers of warm clothing, or in cases where the water is extremely cold, such as the excavation in Red Bay (Labrador), wet suits supplied with a flow of hot water. Underwater communication systems are used to talk to people on the surface or to other divers. Removing sediments covering underwater sites requires the controlled use of specially designed equipment such as suction airlifts and small dredges. Recording information underwater presents its own challenges. Special underwater paper is used for notes and drawings, while photo and video cameras are placed in waterproof housings.

Underwater archaeological fieldwork includes remote-sensing surveys using geophysical techniques, diving surveys to locate and map sites, site monitoring, and excavation. The success of an underwater archaeological project rests on accurate documentation of all aspects of the process. Meticulous mapping and recording are particularly essential when excavation is required, as artefacts and other physical evidence are permanently removed from their original contexts. Archaeologists aim to be able to reconstruct the entire site from the records they generate during fieldwork.

Underwater archeology with Marc-André Bernier

There’s also a podcast interview with Marc-André Bernier where he discusses an important Canadian shipwreck, from the Library and Archives Canada, Underwater Canada: Investigating Shipwrecks webpage (podcast length 27:25), here’s the transcript for those who prefer reading,

Shipwrecks have stirred up interest in Canada’s maritime heritage for many decades. 2014 marks the 100th anniversary of the sinking of the Empress of Ireland, one of Canada’s worst maritime disasters.

In this episode, Marc-André Bernier, Chief of Parks Canada’s Underwater Archaeology Service, joins us to discuss shipwrecks, their importance in Canadian history, and how LAC plays an important role in researching, discovering and investigating them.

Podcast Transcript

Underwater Canada: Investigating Shipwrecks

Jessica Ouvrard: Welcome to “Discover Library and Archives Canada: Your History, Your Documentary Heritage.” I’m your host, Jessica Ouvrard. Join us as we showcase the treasures from our vaults; guide you through our many services; and introduce you to the people who acquire, safeguard and make known Canada’s documentary heritage.

Canada has a rich maritime history filled with many tragedies, from small boats [lost] in the Great Lakes, to the sinking of the Empress of Ireland in the St. Lawrence River, to Sir John Franklin’s doomed expeditions in the Arctic. The shipwrecks capture our imaginations and evoke images of tragedy, heroism, mystery and discovery. 2014 also marks the 100th anniversary of the sinking of the Empress of Ireland.

Marc-André Bernier, Chief of Parks Canada’s Underwater Archaeology Service, is joining us to discuss shipwrecks and their significance in Canada’s history, and LAC’s important role in the research, discovery and investigation of these shipwrecks.

Hello, Marc-André Bernier. Thank you for coming today.

Marc-André Bernier: My pleasure. Hello to you.

JO: For those who don’t know much about underwater archaeology, can you explain what it is and the risks and challenges that it presents?

MAB: I’ll start with the challenges rather than the risks, because there are obviously risks, but we try to minimize them. Diving is inherently risky. But I’ll start with the challenges because they are, to a certain extent, what characterize underwater archaeology.

We face a series of challenges that are more complicated, that make our work much more complicated than terrestrial archaeology. We work on water and underwater, and our working conditions are dictated by what happens outside, by nature. We can’t work every day on the water, especially if our work involves the sea or the ocean, for example. And when we work underwater, we have to deal with constraints in terms of time and sometimes visibility. That means that we have to be extremely well organized. Preparation is crucial. Logistics are crucial.

In terms of preparation, we need to properly prepare our research using archives and so on, but we also have to be prepared in terms of knowing what’s going on in the field. We need to know the environmental conditions and diving conditions, even when we can’t dive. Increasingly, the work involves heading into deeper areas that can only be reached by robots, by remotely operated equipment. So we have to be able to adapt.

We have to be very precise and very organized because sometimes we have only a few minutes to access a site that will tell us many historical secrets. So we have to come very well prepared.

And when we dive, we’re working in a foreign environment. We have to be good divers, yes, but we also have to have access to tools that will give us access to information. We have to take into account currents, darkness, and so on. The work is really very challenging. But with the rapid development of new technologies in recent years, we have access to more and more tools. We do basically the same work as archaeologists on land. However, the work is done in a completely different environment.

JO: A bit hostile in fact.

MAB: A bit hostile, but with sites, objects and information that are not accessible elsewhere. So there’s an opportunity to learn about history in a different way, and in some cases on a much larger scale.

JO: With all the maritime traffic in Canada, there must have been many accidents. Can you talk about them and give us an idea of the number?

MAB: People don’t realize that we’re a maritime country. We are a country that has evolved and developed around water. This was true even before the Europeans arrived. The First Nations often travelled by water. That travel increased or developed differently, if you will, when the Europeans arrived.

The St. Lawrence River, for example, and the Atlantic provinces were the point of entry and the route. We refer to different waterways, such as the Ottawa and Richelieu rivers. They constituted the route. So, there was heavy traffic, which meant many accidents. We’re talking about probably tens of thousands of shipwrecks if we include the Great Lakes and all the coasts of Canada. Since Canada has the longest coastline in the world, there is potential for shipwrecks. Only a small number of those shipwrecks have been found, but some are very significant and extremely impressive as well.

JO: Are there also many military ships, or is it more…?

MAB: That’s another thing that people don’t realize. There have been many military confrontations in Canadian waters, dating back to the New France era, or when Phips (Sir William Phips) arrived at Quebec City in 1690 and laid siege to the city. He arrived by ship and lost ships when he returned. During the Conquest, there were naval confrontations in Louisbourg, Nova Scotia; in Chaleur Bay; and even at Quebec City. Then, in the War of 1812, the Great Lakes were an extremely important maritime theatre of war in terms of naval battles. There are a number of examples in the Richelieu River.

Then we have the Second World War, with ships and German submarines. We all know the stories of the submarines that came inside the Gulf. So there are many military shipwrecks, from the New France era onward.

JO: What were the most significant shipwrecks in Canada? Have all the shipwrecks been found or…?

MAB: No. There are still shipwrecks that remain to be found. These days at Parks Canada, we’ve been looking for two of the shipwrecks that are considered among the most significant in the country: the HMS Erebus and the HMS Terror, Sir John Franklin’s ships lost in the Arctic. Franklin left England in 1845 to find the Northwest Passage, and he was never heard from again. Those are examples of significant shipwrecks that haven’t been found.

However, significance is always relative. A shipwreck may be very significant, especially if there is loss of life. It’s a tragic event that is deeply affecting. There are many shipwrecks that may not be seen as having national historic significance. However, at the local level, they are tragic stories that have very deep significance and that have profoundly affected an area.

That being said, there are ships that bear witness to memorable moments in the history of our country. Among the national historic sites of shipwrecks are, if we go back, the oldest shipwrecks: the Basque wrecks at Red Bay, Labrador, where whales were hunted in the 16th century. It’s even a UNESCO world heritage site. Then, from the New France era, there’s the Corossol from 1693 and the Phips wrecks from 1690. These are very significant shipwrecks.

Also of great significance are the Louisbourg shipwrecks, the battle site, the Battle of the Restigouche historic site, as well as shipwrecks such as the Hamilton and Scourge from the War of 1812. For all practical purposes, those shipwrecks are intact at the bottom of Lake Ontario. And the Franklin shipwrecks-even if they still haven’t been found-have been declared of national historic significance.

So there’s a wide range of shipwrecks that are significant, but there are thousands and thousands of shipwrecks that have significance. A shipwreck may also be of recreational significance. Some shipwrecks may be a little less historically significant, but for divers, they are exceptional sites for appreciating history and for having direct contact with history. That significance matters.

JO: Yes, they have a bit of a magical side.

MAB: They have a very magical side. When we dive shipwrecks, we travel through history. They give us direct access to our past.

JO: Yes. I imagine that finding a shipwreck is a bit like finding a needle in a haystack?

MAB: It can sometimes be a needle in a haystack, but often it’s by chance. Divers will sometimes stumble upon remains, and it leads to the discovery of a shipwreck. But usually, when we’re looking for a shipwreck, we have to start at the beginning and go to the source. We have to begin with the archives. We have to start by doing research, trying to find every small clue because searching in water over a large area is very difficult and complicated. We face logistical and environmental obstacles in our working conditions. It’s also expensive. We need to use ships and small boats.

There are different ways to find shipwrecks. At one extreme is a method that is technologically very simple. We dive and systematically search an area, if it’s not too deep. At the other extreme, we use the most sophisticated equipment. Today we have what we call robotic research vehicles. It is as sophisticated as launching the device, which is a bit like a self-guided torpedo. We launch it and recover it a few hours later. It carries out a sonar sweep of the bottom along a pre-programmed path. Between the two, we have a range of methods.

Basically, we have to properly define the boundaries of the area. It’s detective work. We have to try to recreate the events and define our search area, then use the proper equipment. The side-scan sonar gives us an image, and magnetometers detect metal. We have to decide which of the tools we’ll use. If we don’t do the research beforehand, we’ll lose a great deal of time.

JO: Have you used the LAC collections in your research, and what types of documents have you found?

MAB: Yes, as often as possible. We try to use the off-site archives, but it’s important to have access to the sources. Our research always starts with the archives. As for the types of documents, I mentioned the Basque documents that were collected through Library and Archives Canada. I’ve personally used colonial archives a lot. For the Corossol sinking in 1693, I remember looking at documents and correspondence that talked about the French recovery from the shipwreck the year after 1693, and the entire Phips epic.

At LAC, there’s a copy of the paintings of Creswell [Samuel Gurney Cresswell], who was an illustrator, painter, and also a lieutenant, in charge of doing illustrations during the HMS Investigator’s journey through the Arctic. So there’s a wide variety of documents, and sometimes we are surprised by the personal correspondence, which gives us details that official documents can’t provide.

JO: How do these documents help you in your research?

MAB: The archival records are always surprising. They help us in every respect. You have to see archaeology as detective work. Every detail is significant. It can be the change in topographical names on old maps that refer to events. There are many “Wreck Points” or “Pointe à la barque,” “Anse à la barque,” and so on. They refer to events. People named places after events. So we can always be surprised by bits of information that seem trivial at first.

It ranges from information on the sites and on the events that led to a shipwreck, to what happened after the sinking and what happened overall. What we want is not only to understand an event, but also to understand the event in the larger context of history, such as the history of navigation. Sometimes, the records provide that broader information.

It ranges from the research information to the analysis afterward: what we have, what we found, what it means and what it says about our history. That’s where the records offer limitless possibilities. We always have surprises. That’s why we enjoy coming to the archives, because we never know what we’ll discover.

JO: Yes, it’s always great to open a box.

MAB: It’s like Christmas. It’s like Christmas when we start delving into archival records, and it’s a sort of prelude to what happens in archaeology. When we reach a site, we’re always excited by what the site has to offer. But we have to be prepared to understand it. That’s why preparation using archives is extremely important to our work.

JO: In terms of LAC sources, do you often look at historical maps? Do you look at the different ones, because we have quite a large collection…

MAB: Quite exceptional, yes.

JO: … from the beginning until now?

MAB: Yes. They provide a lot of information, and we use them, like all sources, as much as possible. We look for different things on the maps. Obviously, we look for places that may show shipwreck locations. These maps may also show the navigation corridors or charts. The old charts show anchorages and routes. They help us recreate navigation habits, which helps us understand the navigation and maritime mindset of the era and gives us clues as to where the ships went and where they were lost.

These maps give us that type of information. They also give us information on the topography and the names of places that have changed over the years. Take the example of the Corossol in the Sept-Îles bay. One of the islands in that bay is called Corossol. For years, people looked for the French ship, the Corossol, near that island. However, Manowin Island was also called Corossol at that time and its name changed. So in the old maps, we traced the origin, and the ship lies much closer to that island. Those are some of the clues.

We also have magnificent maps. One in particular comes to mind. It was created in the 19th century on the Îles-de-la-Madeleine by an insurance company agent who made a wreck map of all the shipwrecks that he knew of. To us, that’s like candy. It’s one of the opportunities that maps provide. Maps are magnificent even if we don’t find clues. Just to admire them-they’re absolutely magnificent.

JO: From a historical point of view, why is it important to study shipwrecks?

MAB: Shipwrecks are in fact a microcosm. They represent a small world. During the time of the voyage, there was a world of its own inside the ship. That in itself is interesting. How did people live on board? What were they carrying? These are clues. The advantage of a shipwreck is that it’s like a Polaroid, a fixed image of a specific point in time. When we study a city such as Quebec City that has been continuously occupied, sometimes it’s difficult to see the separation between eras, or even between events. A shipwreck shows a specific time and specific place.

JO: And it’s frozen in time.

MAB: And it’s frozen in time. So here’s an image, in 1740, what did we have? Of course, we find objects made in other eras that were still in use in that time period. But it really gives us a fixed image, a capsule. We often have an image of a time capsule. It’s very useful, because it’s very rare to have these mini Pompeiis, and we have them underwater. It’s absolutely fascinating and interesting. It’s one of the contributions of underwater archaeology.

The other thing is that we don’t necessarily find the same type of material underwater as on land. The preservation conditions are completely different. On land, we find a great deal of metal. Iron stays fairly well preserved. But there’s not much organic material, unless the environment is extremely humid or extremely dry. Underwater, organic materials are very well preserved, especially if the sedimentation is fairly quick. I remember finding cartouches from 1690 that still had paper around them. So the preservation conditions are absolutely exceptional.

That’s why it’s important. The shipwrecks give us unique information that complements what we find on land, but they also offer something that can’t be found elsewhere.

JO: I imagine that there are preservation problems once it’s…

MAB: And that’s the other challenge.

JO: Yes, certainly.

MAB: If an object is brought up, we have to be ready to take action because it starts to degrade the moment we move it…

JO: It comes into contact with oxygen.

MAB: … Yes, but even when we move it, we expose it to a new corrosion, a new degradation. If we bring it to the surface right away, the process accelerates very quickly. We have to keep the object damp. We always have to be ready to take action. For example, if the water heats up too fast, micro-organisms may develop that accelerate the degradation. We then have to be ready to start preservation treatments, which can take years depending on the object. It’s an enormous responsibility and we have to be ready to handle it, if not, we destroy…

JO: … the heritage.

MAB: … what we are trying to save, and that’s to everyone’s detriment.

JO: Why do you think that people are so fascinated by archaeology, and more specifically by shipwrecks?

MAB: That’s also a paradox. We say that people aren’t interested in history. I am firmly convinced that people enjoy history and are interested in it. It must be well narrated, but people are interested in history. There’s already an interest in our past and in our links with the past. If people feel directly affected by the past, they’ll be fascinated by it. If we add on top of that the element of discovery, and archaeology is discovery, and all the myths surrounding artefact hunters…

JO: … treasure hunters.

MAB: … treasures, and so on. It’s an image that people have. Yes, we hunt treasure, but historical treasure. That image applies even more strongly to shipwrecks. There’s always that myth of the Spanish galleon filled with gold. Everyone thinks that all shipwrecks contain a treasure. That being said, there’s a fascination with discovery and with the past, and add on top of that the notion of the bottom of the sea: it’s the final frontier, where we can be surprised by what we discover. Since these discoveries are often remarkably well preserved, people are absolutely fascinated.

We grow up with stories of pirates, shipwrecks and lost ships. These are powerful images. A shipwreck is an image that captures the imagination. But a shipwreck, when we dive a shipwreck, we have direct contact with the past. People are fascinated by that.

JO: Are shipwreck sites accessible to divers?

MAB: Shipwreck sites are very accessible to divers. For us, it’s a basic principle. We want people to be able to visit these sites. Very rarely do we limit access to a site. We do, for example, in Louisbourg, Nova Scotia. The site is accessible, but with a guide. The site must be visited with a guide because the wrecks are unique and very fragile.

However, the basic principle is that, as I was saying, we should try to allow people to savour and absorb the spirit of the site. The best way is to visit the site. So there are sites that are accessible, and we try to make them accessible. We not only make them accessible, but we also promote them. We’re developing tools to provide information to people.

It’s also important to raise awareness. We have the opportunity and privilege to visit the sites. We have to ensure that our children and grandchildren have the same opportunity. So we have to protect and respect [the sites]. In that spirit, the sites have to be accessible because these experiences are absolutely incredible. With technology, we can now make them accessible not only to divers but also virtually, which is interesting and stimulating. Nowadays there are opportunities to make all these wonders available to as many people as possible, even if they don’t have the chance to dive.

JO: How long has Parks Canada been involved in underwater archaeology?

MAB: 2014 marks the 50th anniversary of the first dives at Fort Lennox in 1964 by Sean Gilmore and Walter Zacharchuk. That’s where it began. We’re going back there in August of this year, to the birthplace of underwater archaeology at Parks Canada.

We’re one of the oldest teams in the world, if we can say that. The first time an archaeologist dived a site was in 1960, so we were there basically at the beginning. Parks Canada joined the adventure very early on and it continues to be a part of it to this day. I believe that we’ve studied 225 sites across Canada, in the three oceans, the Great Lakes, rivers, truly across the entire country. We have a wealth of experience, and we’ll celebrate that this year by returning to Fort Lennox where it all began.

JO: Congratulations!

MAB: Thank you very much.

JO: 2014 marks the 100th anniversary of the sinking of the Empress of Ireland. What can you tell us about this maritime accident?

MAB: The story of the Empress begins on May 28, 1914. The Empress of Ireland left Quebec City for England with first, second and third class passengers on board. The Empress left Quebec in the late afternoon, with more than 1,400 passengers and crew on board. The ship headed down the St. Lawrence to Pointe au Père, a pilot station, because pilots were needed to navigate the St. Lawrence, given the reefs and hazards.

The pilot left the Empress at the Pointe au Père pilot station, and the ship resumed her journey. At the same time, the Storstad, a cargo ship, was heading in the opposite direction. In the fog, the two ships collided. The Storstad rammed the Empress of Ireland, creating a hole that immediately filled with water.

At that moment, it was after 1:30 a.m., so almost 2:00 a.m. It was night and foggy. The ship sank within 14 minutes, with a loss of 1,012 lives. Over 400 people survived, but over 1,000 people [died]. Many survivors were pulled from the water either by the ship that collided with the Empress or by other ships that were immediately dispatched.

JO: 14 minutes…

MAB: … In 14 minutes, the ship sank. The water rushed in and the ship sank extremely fast, leaving very little opportunity for people, especially those deeper inside the ship, to save themselves.

JO: So a disaster.

MAB: The greatest maritime tragedy in the history of the country.

JO: What’s your most unforgettable experience at an underwater archaeology site?

MAB: I’ve been doing this job for 24 years now, and I can tell you that I have had extraordinary experiences! There are two that stand out.

One was a Second World War plane in Longue-Pointe-de-Mingan that sank after takeoff. Five of the nine crew members drowned in the plane. In 2009, the plane was found intact at a depth of 40 metres. We knew that five of the crew members were still inside. What was absolutely fascinating, apart from the sense of contact and the very touching story, was that we had the opportunity, chance and privilege to have people who were on the beach when the event occurred, who saw the accident and who saw the soldiers board right beforehand. They told us how it happened and they are a direct link. They are part of the history and they experienced that history.

That was an absolutely incredible human experience. We worked with the American forces to recover the remains of the soldiers. Seeing people who had witnessed the event and who could participate 70 years later was a very powerful moment. Diving the wreck of that plane was truly a journey through time.

The other experience was with the HMS Investigator in the Arctic. That’s the ship that was credited with discovering the Northwest Passage. Actually, the crew found it, since the ship remained trapped in the ice and the crew continued on foot and were saved by another ship. The ship is practically intact up to the upper deck in ten metres of water. When you go down there, the area is completely isolated. The crew spent two winters there. On land we can see the remains of the equipment that they left on the ground. Three graves are also visible. So we can absorb the fact that they were in this environment, which was completely hostile, for two years, with the hope of being rescued.

And the ship: we then dive this amazing exploration machine that’s still upright, with its iron-clad prow to break the ice. It’s an icebreaker from the 1850s. We dive on the deck, with the debris left by the ice, the pieces of the ship completely sheared off by the ice. But underneath that is a complete ship, and on the inside, everything that the people left on board.

I often say that it’s like a time travel machine. We are transported and we can absorb the spirit of the site. That’s what I believe is important, and what we at Parks [Canada] try to impart, the spirit of the site. There was a historic moment, but it occurred at a site. That site must be seen and experienced for maximum appreciation. That’s part of the essence of the historic event and the site. On that site, we truly felt it.

JO: Thank you very much for coming to speak with us today. We greatly appreciate your knowledge of underwater Canada. Thank you.

MAB: Thank you very much.

JO: To learn more about shipwrecks, visit our website Shipwreck Investigations at lac-bac.gc.ca/sos/shipwrecks or read our articles on shipwrecks on thediscoverblog.com [I found other subjects but not shipwrecks in my admittedly brief search of the blog].

Thank you for joining us. I’m your host, Jessica Ouvrard, and you’ve been listening to “Discover Library and Archives Canada-where Canadian history, literature and culture await you.” A special thanks to our guest today, Marc-André Bernier.

A couple of comments. (1) It seems that neither Mr. Bernier nor his team have ever dived on the West Coast or west of Ottawa for that matter. (2) Given Bernier’s comments about oxygen and the degradation of artefacts once exposed to the air, I imagine there’s a fair amount of excitement and interest in Corr’s work on ‘smart nanotech’ for shipwrecks.

Being smart about using artificial intelligence in the field of medicine

Since my August 20, 2018 post featured an opinion piece about the possibly imminent replacement of radiologists with artificial intelligence systems and the latest research about employing them for diagnosing eye diseases, it seems like a good time to examine some of the mythology embedded in the discussion about AI and medicine.

Imperfections in medical AI systems

An August 15, 2018 article for Slate.com by W. Nicholson Price II (who teaches at the University of Michigan School of Law; in addition to his law degree he has a PhD in Biological Sciences from Columbia University) begins with the peppy, optimistic view before veering into more critical territory (Note: Links have been removed),

For millions of people suffering from diabetes, new technology enabled by artificial intelligence promises to make management much easier. Medtronic’s Guardian Connect system promises to alert users 10 to 60 minutes before they hit high or low blood sugar level thresholds, thanks to IBM Watson, “the same supercomputer technology that can predict global weather patterns.” Startup Beta Bionics goes even further: In May, it received Food and Drug Administration approval to start clinical trials on what it calls a “bionic pancreas system” powered by artificial intelligence, capable of “automatically and autonomously managing blood sugar levels 24/7.”

An artificial pancreas powered by artificial intelligence represents a huge step forward for the treatment of diabetes—but getting it right will be hard. Artificial intelligence (also known in various iterations as deep learning and machine learning) promises to automatically learn from patterns in medical data to help us do everything from managing diabetes to finding tumors in an MRI to predicting how long patients will live. But the artificial intelligence techniques involved are typically opaque. We often don’t know how the algorithm makes the eventual decision. And they may change and learn from new data—indeed, that’s a big part of the promise. But when the technology is complicated, opaque, changing, and absolutely vital to the health of a patient, how do we make sure it works as promised?

Price describes how a ‘closed loop’ artificial pancreas with AI would automate insulin levels for diabetic patients, flaws in the automated system, and how companies like to maintain a competitive advantage (Note: Links have been removed),

[…] a “closed loop” artificial pancreas, where software handles the whole issue, receiving and interpreting signals from the monitor, deciding when and how much insulin is needed, and directing the insulin pump to provide the right amount. The first closed-loop system was approved in late 2016. The system should take as much of the issue off the mind of the patient as possible (though, of course, that has limits). Running a closed-loop artificial pancreas is challenging. The way people respond to changing levels of carbohydrates is complicated, as is their response to insulin; it’s hard to model accurately. Making it even more complicated, each individual’s body reacts a little differently.

Here’s where artificial intelligence comes into play. Rather than trying explicitly to figure out the exact model for how bodies react to insulin and to carbohydrates, machine learning methods, given a lot of data, can find patterns and make predictions. And existing continuous glucose monitors (and insulin pumps) are excellent at generating a lot of data. The idea is to train artificial intelligence algorithms on vast amounts of data from diabetic patients, and to use the resulting trained algorithms to run a closed-loop artificial pancreas. Even more exciting, because the system will keep measuring blood glucose, it can learn from the new data and each patient’s artificial pancreas can customize itself over time as it acquires new data from that patient’s particular reactions.

Here’s the tough question: How will we know how well the system works? Diabetes software doesn’t exactly have the best track record when it comes to accuracy. A 2015 study found that among smartphone apps for calculating insulin doses, two-thirds of the apps risked giving incorrect results, often substantially so. … And companies like to keep their algorithms proprietary for a competitive advantage, which makes it hard to know how they work and what flaws might have gone unnoticed in the development process.

There’s more,

These issues aren’t unique to diabetes care—other A.I. algorithms will also be complicated, opaque, and maybe kept secret by their developers. The potential for problems multiplies when an algorithm is learning from data from an entire hospital, or hospital system, or the collected data from an entire state or nation, not just a single patient. …

The [US Food and Drug Administration] FDA is working on this problem. The head of the agency has expressed his enthusiasm for bringing A.I. safely into medical practice, and the agency has a new Digital Health Innovation Action Plan to try to tackle some of these issues. But they’re not easy, and one thing making it harder is a general desire to keep the algorithmic sauce secret. The example of IBM Watson for Oncology has given the field a bit of a recent black eye—it turns out that the company knew the algorithm gave poor recommendations for cancer treatment but kept that secret for more than a year. …

While Price focuses on problems with algorithms and with developers and their business interests, he also hints at some of the body’s complexities.
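Before moving on, it may help to make the closed-loop idea concrete. The following is a deliberately naive, hypothetical Python sketch (synthetic data, invented constants, and no resemblance claimed to Medtronic’s, Beta Bionics’, or any other product): a forecasting step estimates where glucose is heading from recent monitor readings, and a dosing rule turns that forecast into a suggestion. A real artificial pancreas would replace both pieces with trained models, pharmacokinetic constraints, and layers of safety checks, which is exactly where Price’s worries about opacity and validation come in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic continuous-glucose-monitor trace in mg/dL: one reading every
# 5 minutes for 24 hours, a slow daily swing plus sensor noise. Illustrative only.
t = np.arange(288)
glucose = 110 + 35 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 5, t.size)

def forecast_next(history, lookback=6):
    """Straight-line fit to the last few readings, extrapolated one step ahead."""
    recent = history[-lookback:]
    slope, intercept = np.polyfit(np.arange(lookback), recent, 1)
    return slope * lookback + intercept

def suggest_dose(predicted, target=110.0, correction_factor=50.0):
    """Toy correction rule: one insulin unit per `correction_factor` mg/dL above target."""
    return max(0.0, (predicted - target) / correction_factor)

# Walk through the day and report the first moment the toy loop would act.
for i in range(12, len(glucose)):
    predicted = forecast_next(glucose[:i])
    units = suggest_dose(predicted)
    if units > 0.5:
        print(f"t = {5 * i} min: forecast {predicted:.0f} mg/dL "
              f"-> suggest {units:.2f} U of insulin")
        break
```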

Can AI systems be like people?

Susan Baxter, a medical writer with over 20 years experience, a PhD in health economics, and author of countless magazine articles and several books, offers a more person-centered approach to the discussion in her July 6, 2018 posting on susanbaxter.com,

The fascination with AI continues to irk, given that every second thing I read seems to be extolling the magic of AI and medicine and how It Will Change Everything. Which it will not, trust me. The essential issue of illness remains perennial and revolves around an individual for whom no amount of technology will solve anything without human contact. …

But in this world, or so we are told by AI proponents, radiologists will soon be obsolete. [my August 20, 2018 post] The adaptational learning capacities of AI mean that reading a scan or x-ray will soon be more ably done by machines than humans. The presupposition here is that we, the original programmers of this artificial intelligence, understand the vagaries of real life (and real disease) so wonderfully that we can deconstruct these much as we do the game of chess (where, let’s face it, Big Blue ate our lunch) and that analyzing a two-dimensional image of a three-dimensional body, already problematic, can be reduced to a series of algorithms.

Attempting to extrapolate what some “shadow” on a scan might mean in a flesh and blood human isn’t really quite the same as bishop to knight seven. Never mind the false positive/negatives that are considered an acceptable risk or the very real human misery they create.

Moravec called it

It’s called Moravec’s paradox, the inability of humans to realize just how complex basic physical tasks are – and the corresponding inability of AI to mimic it. As you walk across the room, carrying a glass of water, talking to your spouse/friend/cat/child; place the glass on the counter and open the dishwasher door with your foot as you open a jar of pickles at the same time, take a moment to consider just how many concurrent tasks you are doing and just how enormous the computational power these ostensibly simple moves would require.

Researchers in Singapore taught industrial robots to assemble an Ikea chair. Essentially, screw in the legs. A person could probably do this in a minute. Maybe two. The preprogrammed robots took nearly half an hour. And I suspect programming those robots took considerably longer than that.

Ironically, even Elon Musk, who has had major production problems with the Tesla cars rolling out of his high tech factory, has conceded (in a tweet) that “Humans are underrated.”

I wouldn’t necessarily go that far given the political shenanigans of Trump & Co. but in the grand scheme of things I tend to agree. …

Is AI going the way of gene therapy?

Susan draws a parallel between the AI and medicine discussion with the discussion about genetics and medicine (Note: Links have been removed),

On a somewhat similar note – given the extent to which genetics discourse has that same linear, mechanistic  tone [as AI and medicine] – it turns out all this fine talk of using genetics to determine health risk and whatnot is based on nothing more than clever marketing, since a lot of companies are making a lot of money off our belief in DNA. Truth is half the time we don’t even know what a gene is never mind what it actually does;  geneticists still can’t agree on how many genes there are in a human genome, as this article in Nature points out.

Along the same lines, I was most amused to read about something called the Super Seniors Study, research following a group of individuals in their 80’s, 90’s and 100’s who seem to be doing really well. Launched in 2002 and headed by Angela Brooks Wilson, a geneticist at the BC [British Columbia] Cancer Agency and SFU [Simon Fraser University] Chair of biomedical physiology and kinesiology, this longitudinal work is examining possible factors involved in healthy ageing.

Turns out genes had nothing to do with it, the title of the Globe and Mail article notwithstanding. (“Could the DNA of these super seniors hold the secret to healthy aging?” The answer, a resounding “no”, well hidden at the very [end], the part most people wouldn’t even get to.) All of these individuals who were racing about exercising and working part time and living the kind of life that makes one tired just reading about it all had the same “multiple (genetic) factors linked to a high probability of disease”. You know, the gene markers they tell us are “linked” to cancer, heart disease, etc., etc. But these super seniors had all those markers but none of the diseases, demonstrating (pretty strongly) that the so-called genetic links to disease are a load of bunkum. Which (she said modestly) I have been saying for more years than I care to remember. You’re welcome.

The fundamental error in this type of linear thinking is in allowing our metaphors (genes are the “blueprint” of life) and our propensity towards social ideas of determinism to overtake common sense. Biological and physiological systems are not static; they respond and change in reaction to life in its entirety, whether that’s diet and nutrition or toxic and traumatic insults. Immunity alters, endocrinology changes – even how we think and feel affects the efficiency and effectiveness of physiology. Which explains why, as we age, we become increasingly dissimilar.

If you have the time, I encourage you to read Susan’s comments in their entirety.

Scientific certainties

Following on with genetics, gene therapy dreams, and the complexity of biology, the June 19, 2018 Nature article by Cassandra Willyard (mentioned in Susan’s posting) highlights an aspect of scientific research not often mentioned in public,

One of the earliest attempts to estimate the number of genes in the human genome involved tipsy geneticists, a bar in Cold Spring Harbor, New York, and pure guesswork.

That was in 2000, when a draft human genome sequence was still in the works; geneticists were running a sweepstake on how many genes humans have, and wagers ranged from tens of thousands to hundreds of thousands. Almost two decades later, scientists armed with real data still can’t agree on the number — a knowledge gap that they say hampers efforts to spot disease-related mutations.

In 2000, with the genomics community abuzz over the question of how many human genes would be found, Ewan Birney launched the GeneSweep contest. Birney, now co-director of the European Bioinformatics Institute (EBI) in Hinxton, UK, took the first bets at a bar during an annual genetics meeting, and the contest eventually attracted more than 1,000 entries and a US$3,000 jackpot. Bets on the number of genes ranged from more than 312,000 to just under 26,000, with an average of around 40,000. These days, the span of estimates has shrunk — with most now between 19,000 and 22,000 — but there is still disagreement (See ‘Gene Tally’).

… the inconsistencies in the number of genes from database to database are problematic for researchers, Pruitt says. “People want one answer,” she [Kim Pruitt, a genome researcher at the US National Center for Biotechnology Information (NCBI) in Bethesda, Maryland] adds, “but biology is complex.”

I wanted to note that scientists do make guesses and not just in genetics. For example, Gina Mallet’s 2005 book ‘Last Chance to Eat: The Fate of Taste in a Fast Food World’ recounts the story of how good and bad levels of cholesterol were established—the experts made some guesses based on their experience. That said, Willyard’s article details the continuing effort to nail down the number of genes almost two decades after the first draft of the human genome was produced and delves into the problems the scientists have uncovered.

Final comments

In addition to opaque processes with developers/entrepreneurs wanting to maintain their secrets for competitive advantage and in addition to our own poor understanding of the human body (how many genes are there anyway?), there are some major gaps (reflected in AI) in our understanding of various diseases. Angela Lashbrook’s August 16, 2018 article for The Atlantic highlights some issues with skin cancer and the shade of your skin (Note: Links have been removed),

… While fair-skinned people are at the highest risk for contracting skin cancer, the mortality rate for African Americans is considerably higher: Their five-year survival rate is 73 percent, compared with 90 percent for white Americans, according to the American Academy of Dermatology.

As the rates of melanoma for all Americans continue a 30-year climb, dermatologists have begun exploring new technologies to try to reverse this deadly trend—including artificial intelligence. There’s been a growing hope in the field that using machine-learning algorithms to diagnose skin cancers and other skin issues could make for more efficient doctor visits and increased, reliable diagnoses. The earliest results are promising—but also potentially dangerous for darker-skinned patients.

… Avery Smith, … a software engineer in Baltimore, Maryland, co-authored a paper in JAMA [Journal of the American Medical Association] Dermatology that warns of the potential racial disparities that could come from relying on machine learning for skin-cancer screenings. Smith’s co-author, Adewole Adamson of the University of Texas at Austin, has conducted multiple studies on demographic imbalances in dermatology. “African Americans have the highest mortality rate [for skin cancer], and doctors aren’t trained on that particular skin type,” Smith told me over the phone. “When I came across the machine-learning software, one of the first things I thought was how it will perform on black people.”

Recently, a study that tested machine-learning software in dermatology, conducted by a group of researchers primarily out of Germany, found that “deep-learning convolutional neural networks,” or CNN, detected potentially cancerous skin lesions better than the 58 dermatologists included in the study group. The data used for the study come from the International Skin Imaging Collaboration, or ISIC, an open-source repository of skin images to be used by machine-learning algorithms. Given the rise in melanoma cases in the United States, a machine-learning algorithm that assists dermatologists in diagnosing skin cancer earlier could conceivably save thousands of lives each year.

… Chief among the prohibitive issues, according to Smith and Adamson, is that the data the CNN relies on come from primarily fair-skinned populations in the United States, Australia, and Europe. If the algorithm is basing most of its knowledge on how skin lesions appear on fair skin, then theoretically, lesions on patients of color are less likely to be diagnosed. “If you don’t teach the algorithm with a diverse set of images, then that algorithm won’t work out in the public that is diverse,” says Adamson. “So there’s risk, then, for people with skin of color to fall through the cracks.”

As Adamson and Smith’s paper points out, racial disparities in artificial intelligence and machine learning are not a new issue. Algorithms have mistaken images of black people for gorillas, misunderstood Asians to be blinking when they weren’t, and “judged” only white people to be attractive. An even more dangerous issue, according to the paper, is that decades of clinical research have focused primarily on people with light skin, leaving out marginalized communities whose symptoms may present differently.

The reasons for this exclusion are complex. According to Andrew Alexis, a dermatologist at Mount Sinai, in New York City, and the director of the Skin of Color Center, compounding factors include a lack of medical professionals from marginalized communities, inadequate information about those communities, and socioeconomic barriers to participating in research. “In the absence of a diverse study population that reflects that of the U.S. population, potential safety or efficacy considerations could be missed,” he says.

Adamson agrees, elaborating that with inadequate data, machine learning could misdiagnose people of color with nonexistent skin cancers—or miss them entirely. But he understands why the field of dermatology would surge ahead without demographically complete data. “Part of the problem is that people are in such a rush. This happens with any new tech, whether it’s a new drug or test. Folks see how it can be useful and they go full steam ahead without thinking of potential clinical consequences. …

Improving machine-learning algorithms is far from the only method to ensure that people with darker skin tones are protected against the sun and receive diagnoses earlier, when many cancers are more survivable. According to the Skin Cancer Foundation, 63 percent of African Americans don’t wear sunscreen; both they and many dermatologists are more likely to delay diagnosis and treatment because of the belief that dark skin is adequate protection from the sun’s harmful rays. And due to racial disparities in access to health care in America, African Americans are less likely to get treatment in time.
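Adamson’s warning about algorithms trained on unrepresentative data is also something you can check directly once a system is in hand. Here is a minimal sketch of that kind of audit: accuracy broken out by skin-type subgroup rather than reported as a single number. The field names, Fitzpatrick groupings, and the four sample records are hypothetical; the point is that an impressive overall figure can hide a much worse error rate for the patients least represented in the training set.

```python
# Sketch of a simple fairness audit: break model accuracy out by skin-type subgroup.
# The records below are invented; real audits would use held-out test data with
# demographic annotations.
from collections import defaultdict

# (true_label, predicted_label, fitzpatrick_type) -- made-up example records
results = [
    ("melanoma", "melanoma", "I-II"),
    ("benign",   "benign",   "I-II"),
    ("melanoma", "benign",   "V-VI"),   # a miss on darker skin
    ("benign",   "benign",   "V-VI"),
]

by_group = defaultdict(lambda: {"correct": 0, "total": 0})
for truth, pred, group in results:
    by_group[group]["total"] += 1
    by_group[group]["correct"] += int(truth == pred)

for group, counts in sorted(by_group.items()):
    acc = counts["correct"] / counts["total"]
    print(f"Fitzpatrick {group}: accuracy {acc:.0%} on {counts['total']} images")
```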

Happy endings

I’ll add one thing to Price’s article, Susan’s posting, and Lashbrook’s article about the issues with AI, certainty, gene therapy, and medicine—the desire for a happy ending prefaced with an easy solution. If the easy solution isn’t possible, accommodations will be made, but that happy ending is a must. All disease will disappear and there will be peace on earth. (Nod to Susan Baxter and her many discussions with me about disease processes and happy endings.)

The solutions, for the most part, are seen as technological, despite the mountain of evidence suggesting that technology reflects our own imperfect understanding of health and disease and therefore provides, at best, an imperfect solution.

Also, we tend to underestimate just how complex humans are, not only in terms of disease and health but also with regard to our skills, our understanding, and, something acknowledged perhaps not often enough, our ability to respond appropriately in the moment.

There is much to celebrate in what has been accomplished: no more black death, no more smallpox, hip replacements, pacemakers, organ transplants, and much more. Yes, we should try to improve our medicine. But, maybe alongside the celebration we can welcome AI and other technologies with a lot less hype and a lot more skepticism.

Robot radiologists (artificially intelligent doctors)

Mutaz Musa, a physician at New York Presbyterian Hospital/Weill Cornell (Department of Emergency Medicine) and software developer in New York City, has penned an eye-opening opinion piece about artificial intelligence (or robots if you prefer) and the field of radiology. From a June 25, 2018 opinion piece for The Scientist (Note: Links have been removed),

Although artificial intelligence has raised fears of job loss for many, we doctors have thus far enjoyed a smug sense of security. There are signs, however, that the first wave of AI-driven redundancies among doctors is fast approaching. And radiologists seem to be first on the chopping block.

Andrew Ng, founder of online learning platform Coursera and former CTO of “China’s Google,” Baidu, recently announced the development of CheXNet, a convolutional neural net capable of recognizing pneumonia and other thoracic pathologies on chest X-rays better than human radiologists. Earlier this year, a Hungarian group developed a similar system for detecting and classifying features of breast cancer in mammograms. In 2017, Adelaide University researchers published details of a bot capable of matching human radiologist performance in detecting hip fractures. And, of course, Google achieved superhuman proficiency in detecting diabetic retinopathy in fundus photographs, a task outside the scope of most radiologists.

Beyond single, two-dimensional radiographs, a team at Oxford University developed a system for detecting spinal disease from MRI data with a performance equivalent to a human radiologist. Meanwhile, researchers at the University of California, Los Angeles, reported detecting pathology on head CT scans with an error rate more than 20 times lower than a human radiologist.

Although these particular projects are still in the research phase and far from perfect—for instance, often pitting their machines against a limited number of radiologists—the pace of progress alone is telling.

Others have already taken their algorithms out of the lab and into the marketplace. Enlitic, founded by Aussie serial entrepreneur and University of San Francisco researcher Jeremy Howard, is a Bay-Area startup that offers automated X-ray and chest CAT scan interpretation services. Enlitic’s systems putatively can judge the malignancy of nodules up to 50 percent more accurately than a panel of radiologists and identify fractures so small they’d typically be missed by the human eye. One of Enlitic’s largest investors, Capitol Health, owns a network of diagnostic imaging centers throughout Australia, anticipating the broad rollout of this technology. Another Bay-Area startup, Arterys, offers cloud-based medical imaging diagnostics. Arterys’s services extend beyond plain films to cardiac MRIs and CAT scans of the chest and abdomen. And there are many others.

Musa has offered a compelling argument with lots of links to supporting evidence.

[downloaded from https://www.the-scientist.com/news-opinion/opinion–rise-of-the-robot-radiologists-64356]
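A side note on the “better than human radiologists” claims Musa lists: most of them come down to a receiver operating characteristic (ROC) comparison, with the model given an area-under-the-curve (AUC) score and each radiologist plotted as a single sensitivity/specificity point. A hedged sketch of that arithmetic, with invented numbers and scikit-learn assumed to be installed, looks like this.

```python
# Sketch of the metric usually behind "the model beat the radiologists" headlines.
# All numbers here are invented for illustration.
from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]                           # 1 = disease present (hypothetical)
model_scores = [0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3]     # model's probability outputs

print("model AUC:", roc_auc_score(y_true, model_scores))

# A radiologist gives binary reads, so they get sensitivity/specificity instead of AUC.
radiologist_reads = [1, 0, 1, 0, 0, 0, 1, 1]
tp = sum(1 for t, p in zip(y_true, radiologist_reads) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, radiologist_reads) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, radiologist_reads) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, radiologist_reads) if t == 0 and p == 1)
print("radiologist sensitivity:", tp / (tp + fn))
print("radiologist specificity:", tn / (tn + fp))
```

Whether a higher AUC on a curated test set translates into better care in a messy clinic is, of course, exactly the question the skeptics above keep raising.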

And the evidence keeps mounting; I just stumbled across this June 30, 2018 news item on Xinhuanet.com,

An artificial intelligence (AI) system scored 2:0 against elite human physicians Saturday in two rounds of competitions in diagnosing brain tumors and predicting hematoma expansion in Beijing.

The BioMind AI system, developed by the Artificial Intelligence Research Centre for Neurological Disorders at the Beijing Tiantan Hospital and a research team from the Capital Medical University, made correct diagnoses in 87 percent of 225 cases in about 15 minutes, while a team of 15 senior doctors only achieved 66-percent accuracy.

The AI also gave correct predictions in 83 percent of brain hematoma expansion cases, outperforming the 63-percent accuracy among a group of physicians from renowned hospitals across the country.

The outcomes for human physicians were quite normal and even better than the average accuracy in ordinary hospitals, said Gao Peiyi, head of the radiology department at Tiantan Hospital, a leading institution on neurology and neurosurgery.

To train the AI, developers fed it tens of thousands of images of nervous system-related diseases that the Tiantan Hospital has archived over the past 10 years, making it capable of diagnosing common neurological diseases such as meningioma and glioma with an accuracy rate of over 90 percent, comparable to that of a senior doctor.

All the cases were real and contributed by the hospital, but never used as training material for the AI, according to the organizer.

Wang Yongjun, executive vice president of the Tiantan Hospital, said that he personally did not care very much about who won, because the contest was never intended to pit humans against technology but to help doctors learn and improve [emphasis mine] through interactions with technology.

“I hope through this competition, doctors can experience the power of artificial intelligence. This is especially so for some doctors who are skeptical about artificial intelligence. I hope they can further understand AI and eliminate their fears toward it,” said Wang.

Dr. Lin Yi who participated and lost in the second round, said that she welcomes AI, as it is not a threat but a “friend.” [emphasis mine]

AI will not only reduce the workload but also push doctors to keep learning and improve their skills, said Lin.

Bian Xiuwu, an academician with the Chinese Academy of Science and a member of the competition’s jury, said there has never been an absolute standard correct answer in diagnosing developing diseases, and the AI would only serve as an assistant to doctors in giving preliminary results. [emphasis mine]

Dr. Paul Parizel, former president of the European Society of Radiology and another member of the jury, also agreed that AI will not replace doctors, but will instead function similar to how GPS does for drivers. [emphasis mine]

Dr. Gauden Galea, representative of the World Health Organization in China, said AI is an exciting tool for healthcare but still in the primitive stages.

Based on the size of its population and the huge volume of accessible digital medical data, China has a unique advantage in developing medical AI, according to Galea.

China has introduced a series of plans in developing AI applications in recent years.

In 2017, the State Council issued a development plan on the new generation of Artificial Intelligence and the Ministry of Industry and Information Technology also issued the “Three-Year Action Plan for Promoting the Development of a New Generation of Artificial Intelligence (2018-2020).”

The Action Plan proposed developing medical image-assisted diagnostic systems to support medicine in various fields.

I note the reference to cars and global positioning systems (GPS) and their role as ‘helpers’; it seems no one at the ‘AI and radiology’ competition has heard of driverless cars. Here’s Musa on those reassuring comments about how the technology won’t replace experts but rather augment their skills,

To be sure, these services frame themselves as “support products” that “make doctors faster,” rather than replacements that make doctors redundant. This language may reflect a reserved view of the technology, though it likely also represents a marketing strategy keen to avoid threatening or antagonizing incumbents. After all, many of the customers themselves, for now, are radiologists.

Radiology isn’t the only area where experts might find themselves displaced.

Eye experts

It seems artificial intelligence (AI) systems have made inroads into the diagnosis of eye diseases. The story got the ‘Fast Company’ treatment (exciting new tech, learn all about it), as can be seen further down in this posting. First, here’s a more restrained announcement, from an August 14, 2018 news item on phys.org (Note: A link has been removed),

An artificial intelligence (AI) system, which can recommend the correct referral decision for more than 50 eye diseases as accurately as experts, has been developed by Moorfields Eye Hospital NHS Foundation Trust, DeepMind Health and UCL [University College London].

The breakthrough research, published online by Nature Medicine, describes how machine-learning technology has been successfully trained on thousands of historic de-personalised eye scans to identify features of eye disease and recommend how patients should be referred for care.

Researchers hope the technology could one day transform the way professionals carry out eye tests, allowing them to spot conditions earlier and prioritise patients with the most serious eye diseases before irreversible damage sets in.

An August 13, 2018 UCL press release, which originated the news item, describes the research and the reasons behind it in more detail,

More than 285 million people worldwide live with some form of sight loss, including more than two million people in the UK. Eye diseases remain one of the biggest causes of sight loss, and many can be prevented with early detection and treatment.

Dr Pearse Keane, NIHR Clinician Scientist at the UCL Institute of Ophthalmology and consultant ophthalmologist at Moorfields Eye Hospital NHS Foundation Trust said: “The number of eye scans we’re performing is growing at a pace much faster than human experts are able to interpret them. There is a risk that this may cause delays in the diagnosis and treatment of sight-threatening diseases, which can be devastating for patients.”

“The AI technology we’re developing is designed to prioritise patients who need to be seen and treated urgently by a doctor or eye care professional. If we can diagnose and treat eye conditions early, it gives us the best chance of saving people’s sight. With further research it could lead to greater consistency and quality of care for patients with eye problems in the future.”

The study, launched in 2016, brought together leading NHS eye health professionals and scientists from UCL and the National Institute for Health Research (NIHR) with some of the UK’s top technologists at DeepMind to investigate whether AI technology could help improve the care of patients with sight-threatening diseases, such as age-related macular degeneration and diabetic eye disease.

Using two types of neural network – mathematical systems for identifying patterns in images or data – the AI system quickly learnt to identify 10 features of eye disease from highly complex optical coherence tomography (OCT) scans. The system was then able to recommend a referral decision based on the most urgent conditions detected.

To establish whether the AI system was making correct referrals, clinicians also viewed the same OCT scans and made their own referral decisions. The study concluded that AI was able to make the right referral recommendation more than 94% of the time, matching the performance of expert clinicians.

The AI has been developed with two unique features which maximise its potential use in eye care. Firstly, the system can provide information that helps explain to eye care professionals how it arrives at its recommendations. This information includes visuals of the features of eye disease it has identified on the OCT scan and the level of confidence the system has in its recommendations, in the form of a percentage. This functionality is crucial in helping clinicians scrutinise the technology’s recommendations and check its accuracy before deciding the type of care and treatment a patient receives.

Secondly, the AI system can be easily applied to different types of eye scanner, not just the specific model on which it was trained. This could significantly increase the number of people who benefit from this technology and future-proof it, so it can still be used even as OCT scanners are upgraded or replaced over time.

The next step is for the research to go through clinical trials to explore how this technology might improve patient care in practice, and regulatory approval before it can be used in hospitals and other clinical settings.

If clinical trials are successful in demonstrating that the technology can be used safely and effectively, Moorfields will be able to use an eventual, regulatory-approved product for free, across all 30 of their UK hospitals and community clinics, for an initial period of five years.

The work that has gone into this project will also help accelerate wider NHS research for many years to come. For example, DeepMind has invested significant resources to clean, curate and label Moorfields’ de-identified research dataset to create one of the most advanced eye research databases in the world.

Moorfields owns this database as a non-commercial public asset, which is already forming the basis of nine separate medical research studies. In addition, Moorfields can also use DeepMind’s trained AI model for future non-commercial research efforts, which could help advance medical research even further.

Mustafa Suleyman, Co-founder and Head of Applied AI at DeepMind Health, said: “We set up DeepMind Health because we believe artificial intelligence can help solve some of society’s biggest health challenges, like avoidable sight loss, which affects millions of people across the globe. These incredibly exciting results take us one step closer to that goal and could, in time, transform the diagnosis, treatment and management of patients with sight threatening eye conditions, not just at Moorfields, but around the world.”

Professor Sir Peng Tee Khaw, director of the NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust and UCL Institute of Ophthalmology said: “The results of this pioneering research with DeepMind are very exciting and demonstrate the potential sight-saving impact AI could have for patients. I am in no doubt that AI has a vital role to play in the future of healthcare, particularly when it comes to training and helping medical professionals so that patients benefit from vital treatment earlier than might previously have been possible. This shows the transformative research that can be carried out in the UK combining world leading industry and NIHR/NHS hospital/university partnerships.”

Matt Hancock, Health and Social Care Secretary, said: “This is hugely exciting and exactly the type of technology which will benefit the NHS in the long term and improve patient care – that’s why we fund over a billion pounds a year in health research as part of our long term plan for the NHS.”

Here’s a link to and a citation for the study,

Clinically applicable deep learning for diagnosis and referral in retinal disease by Jeffrey De Fauw, Joseph R. Ledsam, Bernardino Romera-Paredes, Stanislav Nikolov, Nenad Tomasev, Sam Blackwell, Harry Askham, Xavier Glorot, Brendan O’Donoghue, Daniel Visentin, George van den Driessche, Balaji Lakshminarayanan, Clemens Meyer, Faith Mackinder, Simon Bouton, Kareem Ayoub, Reena Chopra, Dominic King, Alan Karthikesalingam, Cían O. Hughes, Rosalind Raine, Julian Hughes, Dawn A. Sim, Catherine Egan, Adnan Tufail, Hugh Montgomery, Demis Hassabis, Geraint Rees, Trevor Back, Peng T. Khaw, Mustafa Suleyman, Julien Cornebise, Pearse A. Keane, & Olaf Ronneberger. Nature Medicine (2018) DOI: https://doi.org/10.1038/s41591-018-0107-6 Published 13 August 2018

This paper is behind a paywall.
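For readers wondering what the press release’s “two types of neural network” amounts to in practice, here is a very rough sketch of that two-stage shape: one network turns the raw OCT volume into a tissue-segmentation map, and a second turns that map into referral recommendations with the confidence percentages clinicians see. The layer sizes, the 15 tissue classes, and the four referral categories are illustrative assumptions on my part, not the published architecture (which, as noted, sits behind a paywall).

```python
# Very rough sketch of a two-stage OCT pipeline: segmentation network -> referral
# classifier. Everything here (layer sizes, tissue count, referral labels) is a
# placeholder for illustration, not the published DeepMind/Moorfields model.
import torch
import torch.nn as nn

REFERRALS = ["urgent", "semi-urgent", "routine", "observation only"]

segmenter = nn.Sequential(            # stage 1: OCT voxels -> per-voxel tissue classes
    nn.Conv3d(1, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv3d(8, 15, kernel_size=1),  # e.g. 15 hypothetical tissue types
)

classifier = nn.Sequential(           # stage 2: tissue map -> referral decision
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(15, len(REFERRALS)),
)

oct_volume = torch.randn(1, 1, 32, 64, 64)        # stand-in for one de-identified scan
tissue_map = segmenter(oct_volume)
probs = torch.softmax(classifier(tissue_map), dim=1)[0]

for name, p in zip(REFERRALS, probs.tolist()):
    print(f"{name}: {p:.0%}")                     # the 'confidence percentage' clinicians would see
```

Splitting the work this way is also one plausible reason the system can reportedly be adapted to different scanner models: only the first, device-facing stage has to learn the quirks of a particular machine.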

And now, Melissa Locker’s August 15, 2018 article for Fast Company (Note: Links have been removed),

In a paper published in Nature Medicine on Monday, Google’s DeepMind subsidiary, UCL, and researchers at Moorfields Eye Hospital showed off their new AI system. The researchers used deep learning to create algorithm-driven software that can identify common patterns in data culled from dozens of common eye diseases from 3D scans. The result is an AI that can identify more than 50 diseases with incredible accuracy and can then refer patients to a specialist. Even more important, though, is that the AI can explain why a diagnosis was made, indicating which part of the scan prompted the outcome. It’s an important step in both medicine and in making AIs slightly more human

The editor or writer has even highlighted the sentence about the system’s accuracy—not just good but incredible!

I will be publishing something soon [my August 21, 2018 posting] which highlights some of the questions one might want to ask about AI and medicine before diving headfirst into this brave new world of medicine.