Tag Archives: Canada

Limitless energy and the International Thermonuclear Experimental Reactor (ITER)

Over 30 years in the dreaming, the International Thermonuclear Experimental Reactor (ITER) is now said to be halfway to completing construction. A December 6, 2017 ITER press release (received via email) makes the joyful announcement,

WORLD’S MOST COMPLEX MACHINE IS 50 PERCENT COMPLETED
ITER is proving that fusion is the future source of clean, abundant, safe and economic energy

The International Thermonuclear Experimental Reactor (ITER), a project to prove that fusion power can be produced on a commercial scale and is sustainable, is now 50 percent built to initial operation. Fusion is the energy source of the Sun, which gives the Earth its light and warmth.

ITER will use hydrogen fusion, controlled by superconducting magnets, to produce massive heat energy. In the commercial machines that will follow, this heat will drive turbines to produce electricity with these positive benefits:

* Fusion energy is carbon-free and environmentally sustainable, yet much more powerful than fossil fuels. A pineapple-sized amount of hydrogen offers as much fusion energy as 10,000 tons of coal.

* ITER uses two forms of hydrogen fuel: deuterium, which is easily extracted from seawater; and tritium, which is bred from lithium inside the fusion reactor. The supply of fusion fuel for industry and megacities is abundant, enough for millions of years.

* When the fusion reaction is disrupted, the reactor simply shuts down, safely and without external assistance. Tiny amounts of fuel are used, about 2-3 grams at a time, so there is no physical possibility of a meltdown accident.

* Building and operating a fusion power plant is targeted to be comparable in cost to a fossil fuel or nuclear fission plant. But unlike today’s nuclear plants, a fusion plant will not have the costs of high-level radioactive waste disposal. And unlike fossil fuel plants, fusion will not have the environmental cost of releasing CO2 and other pollutants.
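The "pineapple versus 10,000 tons of coal" comparison in the first bullet checks out as a back-of-envelope calculation. Here is a quick sketch in Python, using textbook values (17.6 MeV per deuterium-tritium reaction, roughly 29 MJ/kg for coal) rather than any figures from the press release:

```python
# Back-of-envelope check: energy in ~1 kg of deuterium-tritium fuel
# (roughly a pineapple-sized lump) vs 10,000 metric tons of coal.
# All constants are textbook values, not taken from the press release.

MEV_TO_J = 1.602e-13          # joules per MeV
U_TO_KG = 1.6605e-27          # kilograms per atomic mass unit

E_PER_REACTION_MEV = 17.6     # energy of one D + T -> He-4 + n reaction
FUEL_MASS_U = 2.014 + 3.016   # mass of one deuterium plus one tritium nucleus

# Fusion energy per kilogram of D-T fuel (~3.4e14 J/kg)
fusion_j_per_kg = E_PER_REACTION_MEV * MEV_TO_J / (FUEL_MASS_U * U_TO_KG)

COAL_J_PER_KG = 29.3e6        # ~29 MJ/kg, standard coal equivalent
coal_j = 10_000 * 1000 * COAL_J_PER_KG   # 10,000 metric tons of coal

ratio = fusion_j_per_kg / coal_j         # 1 kg of fuel vs the whole coal pile
print(f"1 kg D-T fuel ~ {fusion_j_per_kg:.2e} J; "
      f"10,000 t coal ~ {coal_j:.2e} J; ratio ~ {ratio:.2f}")
```

One kilogram of deuterium-tritium fuel, roughly a pineapple's mass, releases about 15 percent more energy than 10,000 metric tons of coal, so the claim is in the right ballpark.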

ITER is the most complex science project in human history. The hydrogen plasma will be heated to 150 million degrees Celsius, ten times hotter than the core of the Sun, to enable the fusion reaction. The process happens in a donut-shaped reactor, called a tokamak(*), which is surrounded by giant magnets that confine and circulate the superheated, ionized plasma away from the metal walls. The superconducting magnets must be cooled to minus 269°C, as cold as interstellar space.

The ITER facility is being built in Southern France by a scientific partnership of 35 countries. ITER’s specialized components, roughly 10 million parts in total, are being manufactured in industrial facilities all over the world. They are subsequently shipped to the ITER worksite, where they must be assembled, piece-by-piece, into the final machine.

Each of the seven ITER members (the European Union, China, India, Japan, Korea, Russia, and the United States) is fabricating a significant portion of the machine. This adds to ITER’s complexity.

In a message dispatched on December 1 [2017] to top-level officials in ITER member governments, the ITER project reported that it had completed 50 percent of the “total construction work scope through First Plasma” (**). First Plasma, scheduled for December 2025, will be the first stage of operation for ITER as a functional machine.

“The stakes are very high for ITER,” writes Bernard Bigot, Ph.D., Director-General of ITER. “When we prove that fusion is a viable energy source, it will eventually replace burning fossil fuels, which are non-renewable and non-sustainable. Fusion will be complementary with wind, solar, and other renewable energies.

“ITER’s success has demanded extraordinary project management, systems engineering, and almost perfect integration of our work.

“Our design has taken advantage of the best expertise of every member’s scientific and industrial base. No country could do this alone. We are all learning from each other, for the world’s mutual benefit.”

The ITER 50 percent milestone is getting significant attention.

“We are fortunate that ITER and fusion has had the support of world leaders, historically and currently,” says Director-General Bigot. “The concept of the ITER project was conceived at the 1985 Geneva Summit between Ronald Reagan and Mikhail Gorbachev. When the ITER Agreement was signed in 2006, it was strongly supported by leaders such as French President Jacques Chirac, U.S. President George W. Bush, and Indian Prime Minister Manmohan Singh.

“More recently, President Macron and U.S. President Donald Trump exchanged letters about ITER after their meeting this past July. One month earlier, President Xi Jinping of China hosted Russian President Vladimir Putin and other world leaders in a showcase featuring ITER and fusion power at the World EXPO in Astana, Kazakhstan.

“We know that other leaders have been similarly involved behind the scenes. It is clear that each ITER member understands the value and importance of this project.”

Why use this complex manufacturing arrangement?

More than 80 percent of the cost of ITER, about US$22 billion (EUR 18 billion), is contributed in the form of components manufactured by the partners. Many of these massive components of the ITER machine must be precisely fitted; for example, 17-meter-high magnets are built to less than a millimeter of tolerance. Each component must be ready on time to fit into the Master Schedule for machine assembly.

Members asked for this deal for three reasons. First, it means that most of the ITER costs paid by any member are actually paid to that member’s companies; the funding stays in-country. Second, the companies working on ITER build new industrial expertise in major fields, such as electromagnetics, cryogenics, robotics, and materials science. Third, this new expertise leads to innovation and spin-offs in other fields.

For example, expertise gained working on ITER’s superconducting magnets is now being used to map the human brain more precisely than ever before.

The European Union is paying 45 percent of the cost; China, India, Japan, Korea, Russia, and the United States each contribute 9 percent. All members share in ITER’s technology; they receive equal access to the intellectual property and innovation that comes from building ITER.

When will commercial fusion plants be ready?

ITER scientists predict that fusion plants will start to come on line as soon as 2040. The exact timing, according to fusion experts, will depend on the level of public urgency and political will that translates to financial investment.

How much power will they provide?

The ITER tokamak will produce 500 megawatts of thermal power. This size is suitable for studying a “burning” or largely self-heating plasma, a state of matter that has never been produced in a controlled environment on Earth. In a burning plasma, most of the plasma heating comes from the fusion reaction itself. Studying the fusion science and technology at ITER’s scale will enable optimization of the plants that follow.

A commercial fusion plant will be designed with a slightly larger plasma chamber, for 10-15 times more electrical power. A 2,000-megawatt fusion electricity plant, for example, would supply 2 million homes.
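A quick sanity check of the "two million homes" figure (my arithmetic, not the press release's):

```python
# Average power per household implied by the press release's example:
# a 2,000 MW (electrical) fusion plant supplying 2 million homes.
plant_mw = 2000
homes = 2_000_000

watts_per_home = plant_mw * 1e6 / homes               # average draw per home
kwh_per_home_per_year = watts_per_home * 8766 / 1000  # 8,766 hours in a year

print(watts_per_home, round(kwh_per_home_per_year))
```

An average draw of 1 kW, or roughly 8,800 kWh per year per home, is in line with typical North American household consumption, so the example is internally consistent.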

How much would a fusion plant cost and how many will be needed?

The initial capital cost of a 2,000-megawatt fusion plant will be in the range of $10 billion. These capital costs will be offset by extremely low operating costs, negligible fuel costs, and infrequent component replacement costs over the 60-year-plus life of the plant. Capital costs will decrease with large-scale deployment of fusion plants.

At current electricity usage rates, one fusion plant would be more than enough to power a city the size of Washington, D.C. The entire D.C. metropolitan area could be powered with four fusion plants, with zero carbon emissions.

“If fusion power becomes universal, the use of electricity could be expanded greatly, to reduce the greenhouse gas emissions from transportation, buildings and industry,” predicts Dr. Bigot. “Providing clean, abundant, safe, economic energy will be a miracle for our planet.”

*     *     *

FOOTNOTES:

* “Tokamak” is a word of Russian origin meaning a toroidal or donut-shaped magnetic chamber. Tokamaks have been built and operated for the past six decades. They are today’s most advanced fusion device design.

** “Total construction work scope,” as used in ITER’s project performance metrics, includes design, component manufacturing, building construction, shipping and delivery, assembly, and installation.

It is an extraordinary project on many levels as Henry Fountain notes in a March 27, 2017 article for the New York Times (Note: Links have been removed),

At a dusty construction site here amid the limestone ridges of Provence, workers scurry around immense slabs of concrete arranged in a ring like a modern-day Stonehenge.

It looks like the beginnings of a large commercial power plant, but it is not. The project, called ITER, is an enormous, and enormously complex and costly, physics experiment. But if it succeeds, it could determine the power plants of the future and make an invaluable contribution to reducing planet-warming emissions.

ITER, short for International Thermonuclear Experimental Reactor (and pronounced EAT-er), is being built to test a long-held dream: that nuclear fusion, the atomic reaction that takes place in the sun and in hydrogen bombs, can be controlled to generate power.

ITER will produce heat, not electricity. But if it works — if it produces more energy than it consumes, which smaller fusion experiments so far have not been able to do — it could lead to plants that generate electricity without the climate-affecting carbon emissions of fossil-fuel plants or most of the hazards of existing nuclear reactors that split atoms rather than join them.

Success, however, has always seemed just a few decades away for ITER. The project has progressed in fits and starts for years, plagued by design and management problems that have led to long delays and ballooning costs.

ITER is moving ahead now, with a director-general, Bernard Bigot, who took over two years ago after an independent analysis that was highly critical of the project. Dr. Bigot, who previously ran France’s atomic energy agency, has earned high marks for resolving management problems and developing a realistic schedule based more on physics and engineering and less on politics.

The site here is now studded with tower cranes as crews work on the concrete structures that will support and surround the heart of the experiment, a doughnut-shaped chamber called a tokamak. This is where the fusion reactions will take place, within a plasma, a roiling cloud of ionized atoms so hot that it can be contained only by extremely strong magnetic fields.

Here’s a rendering of the proposed reactor,

Source: ITER Organization

It seems the folks at the New York Times decided to remove the notes which help make sense of this image. However, it does get the idea across.

If I read the article rightly, the official cost estimate in March 2017 was around EUR 22 billion, and more will likely be needed. You can read Fountain’s article for more information about fusion and ITER or go to the ITER website.

I could have sworn a local (Vancouver area) company called General Fusion was involved in the ITER project but I can’t track down any sources for confirmation. The sole connection I could find is in a documentary about fusion technology,

Here’s a little context for the film from a July 4, 2017 General Fusion news release (Note: A link has been removed),

A new documentary featuring General Fusion has captured the exciting progress in fusion across the public and private sectors.

Let There Be Light made its international premiere at the South By Southwest (SXSW) music and film festival in March [2017] to critical acclaim. The film was quickly purchased by Amazon Video, where it will be available for more than 70 million users to stream.

Let There Be Light follows scientists at General Fusion, ITER and Lawrenceville Plasma Physics in their pursuit of a clean, safe and abundant source of energy to power the world.

The feature-length documentary has screened internationally across Europe and North America. Most recently, it was shown at the Hot Docs film festival in Toronto, where General Fusion founder and Chief Scientist Dr. Michel Laberge joined fellow fusion physicist Dr. Mark Henderson from ITER at a series of Q&A panels with the filmmakers.

Laberge and Henderson were also interviewed by the popular CBC radio science show Quirks and Quarks, discussing different approaches to fusion, its potential benefits, and the challenges it faces.

It is yet to be confirmed when the film will be released for streaming; check Amazon Video for details.

You can find out more about General Fusion here.

Brief final comment

ITER is a breathtaking effort, but if you’ve read about other large-scale projects (building a railway across the Canadian Rocky Mountains, establishing telecommunications in an astonishing number of countries around the world, getting someone to the moon, eliminating smallpox, building the pyramids, and so on), this pattern of setbacks seems to be standard operating procedure, both for the successes I’ve described and for the failures we’ve forgotten. Where ITER will finally rest on the continuum between success and failure is yet to be determined, but the problems experienced so far are not necessarily a predictor.

I wish the engineers, scientists, visionaries, and others great success with finding better ways to produce energy.

Of musical parodies, Despacito, and evolution

What great timing: I just found out about a musical science parody featuring evolution and biology, and also learned the latest news about the study of evolution on one of the islands in the Galapagos (where Charles Darwin made some of his observations). Thanks to Stacey Johnson for her November 24, 2017 posting on the Signals blog, which features Evo-Devo (Despacito Biology Parody), an A Capella Science music video from Tim Blais,

Now, for the latest regarding the Galapagos and evolution (from a November 24, 2017 news item on ScienceDaily),

The arrival 36 years ago of a strange bird to a remote island in the Galapagos archipelago has provided direct genetic evidence of a novel way in which new species arise.

In this week’s issue of the journal Science, researchers from Princeton University and Uppsala University in Sweden report that the newcomer belonging to one species mated with a member of another species resident on the island, giving rise to a new species that today consists of roughly 30 individuals.

The study comes from work conducted on Darwin’s finches, which live on the Galapagos Islands in the Pacific Ocean. The remote location has enabled researchers to study the evolution of biodiversity due to natural selection.

The direct observation of the origin of this new species occurred during field work carried out over the last four decades by B. Rosemary and Peter Grant, two scientists from Princeton, on the small island of Daphne Major.

A November 23, 2017 Princeton University news release on EurekAlert, which originated the news item, provides more detail,

“The novelty of this study is that we can follow the emergence of new species in the wild,” said B. Rosemary Grant, a senior research biologist, emeritus, and a senior biologist in the Department of Ecology and Evolutionary Biology. “Through our work on Daphne Major, we were able to observe the pairing up of two birds from different species and then follow what happened to see how speciation occurred.”

In 1981, a graduate student working with the Grants on Daphne Major noticed the newcomer, a male that sang an unusual song and was much larger in body and beak size than the three resident species of birds on the island.

“We didn’t see him fly in from over the sea, but we noticed him shortly after he arrived. He was so different from the other birds that we knew he did not hatch from an egg on Daphne Major,” said Peter Grant, the Class of 1877 Professor of Zoology, Emeritus, and a professor of ecology and evolutionary biology, emeritus.

The researchers took a blood sample and released the bird, which later bred with a resident medium ground finch of the species Geospiza fortis, initiating a new lineage. The Grants and their research team followed the new “Big Bird lineage” for six generations, taking blood samples for use in genetic analysis.

In the current study, researchers from Uppsala University analyzed DNA collected from the parent birds and their offspring over the years. The investigators discovered that the original male parent was a large cactus finch of the species Geospiza conirostris from Española island, which is more than 100 kilometers (about 62 miles) to the southeast in the archipelago.

The remarkable distance meant that the male finch was not able to return home to mate with a member of his own species and so chose a mate from among the three species already on Daphne Major. This reproductive isolation is considered a critical step in the development of a new species when two separate species interbreed.

The offspring were also reproductively isolated because their song, which is used to attract mates, was unusual and failed to attract females from the resident species. The offspring also differed from the resident species in beak size and shape, which is a major cue for mate choice. As a result, the offspring mated with members of their own lineage, strengthening the development of the new species.

Researchers previously assumed that the formation of a new species takes a very long time, but in the Big Bird lineage it happened in just two generations, according to observations made by the Grants in the field in combination with the genetic studies.

All 18 species of Darwin’s finches derived from a single ancestral species that colonized the Galápagos about one to two million years ago. The finches have since diversified into different species, and changes in beak shape and size have allowed different species to utilize different food sources on the Galápagos. A critical requirement for speciation to occur through hybridization of two distinct species is that the new lineage must be ecologically competitive — that is, good at competing for food and other resources with the other species — and this has been the case for the Big Bird lineage.

“It is very striking that when we compare the size and shape of the Big Bird beaks with the beak morphologies of the other three species inhabiting Daphne Major, the Big Birds occupy their own niche in the beak morphology space,” said Sangeet Lamichhaney, a postdoctoral fellow at Harvard University and the first author on the study. “Thus, the combination of gene variants contributed from the two interbreeding species in combination with natural selection led to the evolution of a beak morphology that was competitive and unique.”

The definition of a species has traditionally included the inability to produce fully fertile progeny from interbreeding species, as is the case for the horse and the donkey, for example. However, in recent years it has become clear that some closely related species, which normally avoid breeding with each other, do indeed produce offspring that can pass genes to subsequent generations. The authors of the study have previously reported that there has been a considerable amount of gene flow among species of Darwin’s finches over the last several thousands of years.

One of the most striking aspects of this study is that hybridization between two distinct species led to the development of a new lineage that after only two generations behaved as any other species of Darwin’s finches, explained Leif Andersson, a professor at Uppsala University who is also affiliated with the Swedish University of Agricultural Sciences and Texas A&M University. “A naturalist who came to Daphne Major without knowing that this lineage arose very recently would have recognized this lineage as one of the four species on the island. This clearly demonstrates the value of long-running field studies,” he said.

It is likely that new lineages like the Big Birds have originated many times during the evolution of Darwin’s finches, according to the authors. The majority of these lineages have gone extinct but some may have led to the evolution of contemporary species. “We have no indication about the long-term survival of the Big Bird lineage, but it has the potential to become a success, and it provides a beautiful example of one way in which speciation occurs,” said Andersson. “Charles Darwin would have been excited to read this paper.”

Here’s a link to and a citation for the paper,

Rapid hybrid speciation in Darwin’s finches by Sangeet Lamichhaney, Fan Han, Matthew T. Webster, Leif Andersson, B. Rosemary Grant, Peter R. Grant. Science 23 Nov 2017: eaao4593 DOI: 10.1126/science.aao4593

This paper is behind a paywall.

Happy weekend! And for those who love their Despacito, there’s this parody featuring three Italians in a small car (thanks again to Stacey Johnson’s blog posting),

Predictive policing in Vancouver—the first jurisdiction in Canada to employ a machine learning system for property theft reduction

Predictive policing has come to Canada, specifically, Vancouver. A July 22, 2017 article by Matt Meuse for the Canadian Broadcasting Corporation (CBC) news online describes the new policing tool,

The Vancouver Police Department is implementing a city-wide “predictive policing” system that uses machine learning to prevent break-ins by predicting where they will occur before they happen — the first of its kind in Canada.

Police chief Adam Palmer said that, after a six-month pilot project in 2016, the system is now accessible to all officers via their cruisers’ onboard computers, covering the entire city.

“Instead of officers just patrolling randomly throughout the neighbourhood, this will give them targeted areas it makes more sense to patrol in because there’s a higher likelihood of crime to occur,” Palmer said.


Things got off to a slow start as the system familiarized itself [during a 2016 pilot project] with the data, and floundered in the fall due to unexpected data corruption.

But Special Const. Ryan Prox said the system reduced property crime by as much as 27 per cent in areas where it was tested, compared to the previous four years.

The accuracy of the system was also tested by having it generate predictions for a given day, and then watching to see what happened that day without acting on the predictions.

Palmer said the system was getting accuracy rates between 70 and 80 per cent.

When a location is identified by the system, Palmer said officers can be deployed to patrol that location. …

“Quite often … that visible presence will deter people from committing crimes [altogether],” Palmer said.

Though similar systems are used in the United States, Palmer said the system is the first of its kind in Canada, and was developed specifically for the VPD.

While the current focus is on residential break-ins, Palmer said the system could also be tweaked for use with car theft — though likely not with violent crime, which is far less predictable.

Palmer dismissed the inevitable comparison to the 2002 Tom Cruise film Minority Report, in which people are arrested to prevent them from committing crimes in the future.

“We’re not targeting people, we’re targeting locations,” Palmer said. “There’s nothing dark here.”

If you want to get a sense of just how dismissive Chief Palmer was, there’s a July 21, 2017 press conference (run time: approx. 21 mins.) embedded with a media release of the same date. The media release offered these details,

The new model is being implemented after the VPD ran a six-month pilot study in 2016 that contributed to a substantial decrease in residential break-and-enters.

The pilot ran from April 1 to September 30, 2016. The number of residential break-and-enters during the test period was compared to the monthly average over the same period for the previous four years (2012 to 2015). The highest drop in property crime – 27 per cent – was measured in June.

The new model provides data in two-hour intervals for locations where residential and commercial break-and-enters are anticipated. The information is for 100-metre and 500-metre zones. Police resources can be dispatched to that area on foot or in patrol cars, to provide a visible presence to deter thieves.

The VPD’s new predictive policing model is built on GEODASH – an advanced machine-learning technology that was implemented by the VPD in 2015. A public version of GEODASH was introduced in December 2015 and is publicly available on vpd.ca. It retroactively plots the location of crimes on a map to provide a general idea of crime trends to the public.
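The VPD has not published how the "silent test" described in the CBC article was scored, but the basic idea of checking predictions against outcomes without acting on them can be sketched simply. Everything below (the zone IDs, the two-hour windows, and the scoring rule) is my own assumption for illustration, not the VPD's method:

```python
# Hypothetical sketch of a "silent test": generate predictions for a day,
# take no action, then score them against what actually happened.
# Zone IDs, time windows, and the scoring rule are invented for illustration.

def hit_rate(predicted, actual):
    """Fraction of predicted (zone, window) cells in which at least one
    break-in was actually reported."""
    if not predicted:
        return 0.0
    hits = sum(1 for cell in predicted if cell in actual)
    return hits / len(predicted)

# Predictions: 100 m zones paired with two-hour windows (8 = 8-10 am, etc.)
predicted = {("zone_017", 8), ("zone_042", 8), ("zone_042", 10), ("zone_311", 14)}
# Incidents actually reported that day, mapped onto the same cells
actual = {("zone_042", 8), ("zone_042", 10), ("zone_311", 14), ("zone_500", 20)}

print(f"hit rate: {hit_rate(predicted, actual):.0%}")  # 3 of 4 cells -> 75%
```

A score in this style would sit in the 70-80 per cent range Chief Palmer cites, though note it says nothing about incidents the system failed to predict.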

I wish Chief Palmer had been a bit more open to discussing the implications of ‘predictive policing’. In the US, where these systems have been employed in various jurisdictions, some concern is emerging after an almost euphoric initial response, as a Nov. 21, 2016 article by Logan Koepke for Slate notes (Note: Links have been removed),

When predictive policing systems began rolling out nationwide about five years ago, coverage was often uncritical and overly reliant on references to Minority Report’s precog system. The coverage made predictive policing—the computer systems that attempt to use data to forecast where crime will happen or who will be involved—seem almost magical.

Typically, though, articles glossed over Minority Report’s moral about how such systems can go awry. Even Slate wasn’t immune, running a piece in 2011 called “Time Cops” that said, when it came to these systems, “Civil libertarians can rest easy.”

This soothsaying language extended beyond just media outlets. According to former New York City Police Commissioner William Bratton, predictive policing is the “wave of the future.” Microsoft agrees. One vendor even markets its system as “better than a crystal ball.” More recent coverage has rightfully been more balanced, skeptical, and critical. But many still seem to miss an important point: When it comes to predictive policing, what matters most isn’t the future—it’s the past.

Some predictive policing systems incorporate information like the weather, a location’s proximity to a liquor store, or even commercial data brokerage information. But at their core, they rely either mostly or entirely on historical crime data held by the police. Typically, these are records of reported crimes—911 calls or “calls for service”—and other crimes the police detect. Software automatically looks for historical patterns in the data, and uses those patterns to make its forecasts—a process known as machine learning.

Intuitively, it makes sense that predictive policing systems would base their forecasts on historical crime data. But historical crime data has limits. Criminologists have long emphasized that crime reports—and other statistics gathered by the police—do not necessarily offer an accurate picture of crime in a community. The Department of Justice’s National Crime Victimization Survey estimates that from 2006 to 2010, 52 percent of violent crime went unreported to police, as did 60 percent of household property crime. Essentially: Historical crime data is a direct record of how law enforcement responds to particular crimes, rather than the true rate of crime. Rather than predicting actual criminal activity, then, the current systems are probably better at predicting future police enforcement.
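Koepke's point about historical data is easy to illustrate with a toy simulation (all the numbers here are invented): give two areas the identical true crime rate but different reporting rates, and a system trained only on reported crimes will consistently "predict" more crime in the area that reports more.

```python
import random

# Toy illustration of the reporting-bias problem (entirely invented numbers):
# two areas with the SAME true crime rate, but different reporting rates.
random.seed(1)
TRUE_CRIMES_PER_WEEK = 10                      # identical in both areas
report_rate = {"area_A": 0.8, "area_B": 0.4}   # hypothetical reporting rates

def reported_count(area, weeks=52):
    """Crimes that make it into the police database over `weeks` weeks."""
    total = 0
    for _ in range(weeks * TRUE_CRIMES_PER_WEEK):
        if random.random() < report_rate[area]:
            total += 1
    return total

counts = {area: reported_count(area) for area in report_rate}
# A naive forecaster trained on this data would direct patrols to area_A,
# even though the underlying crime rate is identical in both areas.
print(counts)
```

The two areas experience the same number of crimes, yet the database shows roughly twice as many in area_A, which is exactly the gap between "predicting crime" and "predicting police records" that Koepke describes.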

Koepke goes on to cover other potential issues with ‘predictive policing’ in this thoughtful piece. He also co-authored an August 2016 report, Stuck in a Pattern: Early evidence on “predictive” policing and civil rights.

There seems to be increasing attention on machine learning and bias. My May 24, 2017 posting provides links to other FrogHeart postings on the topic, and my Feb. 28, 2017 posting about a new regional big data sharing project, the Cascadia Urban Analytics Cooperative, mentions Cathy O’Neil (author of Weapons of Math Destruction) and her critique in a subsection titled ‘Algorithms and big data’.

I would like to see some oversight and some discussion in Canada about this brave new world of big data.

One final comment: it is possible to get access to the Vancouver Police Department’s data through the City of Vancouver’s Open Data Catalogue (home page).

A jellyfish chat on November 28, 2017 at a Café Scientifique Vancouver get-together

Café Scientifique Vancouver sent me an announcement (via email) about their upcoming event,

We are pleased to announce our next café, which will happen on TUESDAY, NOVEMBER 28TH at 7:30PM in the back room of YAGGER'S DOWNTOWN (433 W Pender).

JELLYFISH – FRIEND, FOE, OR FOOD?

Did you know that in addition to stinging swimmers, jellyfish also cause extensive damage to fisheries and coastal power plants? As threats such as overfishing, pollution, and climate change alter the marine environment, recent media reports are proclaiming that jellyfish are taking over the oceans. Should we hail our new jellyfish overlords or do we need to examine the evidence behind these claims? Join Café Scientifique on Nov. 28, 2017 to learn everything you ever wanted to know about jellyfish, and find out if jelly burgers are coming soon to a menu near you.

Our speaker for the evening will be DR. LUCAS BROTZ, a Postdoctoral Research Fellow with the Sea Around Us at UBC’s Institute for the Oceans and Fisheries. Lucas has been studying jellyfish for more than a decade, and has been called “Canada’s foremost jellyfish researcher” by CBC Nature of Things host Dr. David Suzuki. Lucas has participated in numerous international scientific collaborations, and his research has been featured in more than 100 media outlets, including Nature News, The Washington Post, and The New York Times. He recently received the Michael A. Bigg award for highly significant student research as part of the Coastal Ocean Awards at the Vancouver Aquarium.

We hope to see you there!

You can find out more about Lucas Brotz here and about Sea Around Us here.

For anyone who’s curious about the jellyfish ‘issue’, there’s a November 8, 2017 Norwegian University of Science and Technology press release on AlphaGalileo (or on EurekAlert), which provides insight into the problems and the possibilities,

Jellyfish could be a resource for producing microplastic filters, fertilizer or fish feed. A new 6 million euro project called GoJelly, funded by the EU and coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, and including partners at the Norwegian University of Science and Technology (NTNU) and SINTEF [headquartered in Trondheim, Norway, SINTEF is the largest independent research organisation in Scandinavia; more about SINTEF in its Wikipedia entry], hopes to turn jellyfish from a nuisance into a useful product.

Global climate change and the human impact on marine ecosystems have led to dramatic decreases in the number of fish in the ocean. They have also had an unforeseen side effect: because overfishing decreases the numbers of jellyfish competitors, jellyfish blooms are on the rise.

The GoJelly project, coordinated by the GEOMAR Helmholtz Centre for Ocean Research, Germany, would like to transform problematic jellyfish into a resource that can be used to produce microplastic filters, fertilizer or fish feed. The EU has just approved funding of EUR 6 million over 4 years to support the project through its Horizon 2020 programme.

Rising water temperatures, ocean acidification and overfishing seem to favour jellyfish blooms. More and more often, they appear in huge numbers that have already destroyed entire fish farms on European coasts and blocked cooling systems of power stations near the coast. A number of jellyfish species are poisonous, while some tropical species are even among the most toxic animals on earth.

“In Europe alone, the imported American comb jelly has a biomass of one billion tons. While we tend to ignore the jellyfish, there must be other solutions,” says Jamileh Javidpour of GEOMAR, initiator and coordinator of the GoJelly project, which is a consortium of 15 scientific institutions from eight countries led by the GEOMAR Helmholtz Centre for Ocean Research in Kiel.

The project will first entail exploring the life cycle of a number of jellyfish species. A lack of knowledge about life cycles makes it almost impossible to predict when and why a large jellyfish bloom will occur. “This is what we want to change so that large jellyfish swarms can be caught before they reach the coasts,” says Javidpour.

At the same time, the project partners will also try to answer the question of what to do with jellyfish once they have been caught. One idea is to use the jellyfish to battle another, man-made threat.

“Studies have shown that mucus of jellyfish can bind microplastic. Therefore, we want to test whether biofilters can be produced from jellyfish. These biofilters could then be used in sewage treatment plants or in factories where microplastic is produced,” the GoJelly researchers say.

Jellyfish can also be used as fertilizers for agriculture or as aquaculture feed. “Fish in fish farms are currently fed with captured wild fish, which does not reduce the problem of overfishing, but increases it. Jellyfish as feed would be much more sustainable and would protect natural fish stocks,” says the GoJelly team.

Another option is using jellyfish as food for humans. “In some cultures, jellyfish are already on the menu. As long as the end product is no longer slimy, it could also gain greater general acceptance,” said Javidpour. Finally, and importantly, jellyfish contain collagen, a substance very much sought after in the cosmetics industry.

Project partners from the Norwegian University of Science and Technology, led by Nicole Aberle-Malzahn, and SINTEF Ocean, led by Rachel Tiller, will analyse how abiotic (hydrography, temperature), biotic (abundance, biomass, ecology, reproduction) and biochemical parameters (stoichiometry, food quality) affect the initiation of jellyfish blooms.

Based on a comprehensive analysis of triggering mechanisms, origin of seed populations and ecological modelling, the researchers hope to be able to make more reliable predictions on jellyfish bloom formation of specific taxa in the GoJelly target areas. This knowledge will allow sustainable harvesting of jellyfish communities from various Northern and Southern European populations.

This harvest will provide a marine biomass of unknown potential, which researchers at SINTEF Ocean, among others, will explore for possible uses.

A team from SINTEF Ocean’s strategic program Clean Ocean will also work with European colleagues on developing a filter from the mucus of the jellyfish that will catch microplastics from household products (which have their source in fleece sweaters, breakdown of plastic products or from cosmetics, for example) and prevent these from entering the marine ecosystem.

Finally, SINTEF Ocean will examine the socio-ecological system and games, where they will explore the potentials of an emerging international management regime for a global effort to mitigate the negative effects of microplastics in the oceans.

“Jellyfish can be used for many purposes. We see this as an opportunity to use the potential of the huge biomass drifting right in front of our front door,” Javidpour said.

You can find out more about GoJelly on their Twitter account.

Art/science events in Vancouver, Canada (Nov. 22, 2017) and Toronto (Dec. 1, 2017)

The first event I’m highlighting is the Curiosity Collider Cafe’s Nov. 22, 2017 event in Vancouver (Canada), from a November 14, 2017 announcement received via email,

Art, science, & neuroscience. Visualizing/sonifying particle collisions. Colors from nature. Sci-art career adventure. Our #ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science.

Meet, discover, connect, create. Are you curious?

Join us at “Collider Cafe: Art. Science. Interwoven.” to explore how art and science intersect in the exploration of curiosity.

When: 8:00pm on Wednesday, November 22, 2017.

Doors open at 7:30pm.

Where: Café Deux Soleils, 2096 Commercial Drive, Vancouver, BC (Google Map).

Cost: $5-10 (sliding scale) cover at the door.

Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events.

With speakers:

Caitlin Ffrench (painter, writer, and textile artist) – Colours from Nature

Claudia Krebs (neuroanatomy professor) – Does the brain really differentiate between science and art?

Derek Tan (photographer, illustrator, and multimedia designer) – Design for Science: How I Got My Job

Eli York (neuroscience researcher) – Imaging the brain’s immune system

Leó Stefánsson (multimedia artist) – Experiencing Data: Visualizing and Sonifying Particle Collisions

Follow updates on twitter via @ccollider or #ColliderCafe.

Head to the Facebook event page – let us know you are coming and share this event with others!

Then in Toronto, there’s the ArtSci Salon with an event about what they claim is one of the hottest topics today: STEAM. For the uninitiated, the acronym is for Science, Technology, Engineering, Art, and Mathematics which some hope will supersede STEM (Science, Technology, Engineering, and Mathematics). Regardless, here’s more from a November 13, 2017 Art/Sci Salon announcement received via email,

The ArtSci Salon presents:

What does A stand for in STEAM?

Date: December 1, 2017

Time: 5:30-7:30 pm

Location: The Fields Institute for Research in Mathematical Sciences
222 College Street, Toronto, ON

Please, RSVP here
http://bit.ly/2zH8nrN

Grouping four broadly defined disciplinary clusters –– Science, Technology, Engineering and Mathematics –– STEM has come to stand for governments’ and institutions’ attempt to champion ambitious programs geared towards excellence and innovation while providing hopeful students with “useful” education leading to “real jobs”. But in recent years education advocates have reiterated the crucial role of the arts in achieving such excellence. A has been added to STEM…

But what does A stand for in STEAM? What is its role? and how is it interpreted by those involved in STEM education, by arts practitioners and educators and by science communicators? It turns out that A has different roles, meanings, applications, interpretations…

Please, join us for an intriguing discussion on STEAM education and STEAM approaches. Our guests represent different experiences, backgrounds and areas of research. Your participation will make their contributions even richer.

With:

Linda Duvall (Visual and Media Artist)

Richard Lachman (Associate Professor, RTA School of Media, Ryerson University)

Jan McMillin (Teacher/Librarian, Queen Victoria P.S.)

Jenn Stroud Rossmann (Professor, Mechanical Engineering – Lafayette College)

Lauren Williams (Special Collections Librarian – Thomas Fisher Rare Book Library)

Bios

Linda Duvall is a Saskatoon-based visual artist whose work exists at the intersection of collaboration, performance and conversation. Her hybrid practice addresses recurring themes of connection to place, grief and loss, and the many meanings of exclusion and absence.

Richard Lachman directs the Zone Learning network of incubators for Ryerson University, Research Development for the Faculty of Communication and Design, and the Experiential Media Institute. His research interests include transmedia storytelling, digital documentaries, augmented/locative/VR experiences, mixed realities, and collaborative design thinking.

Jan McMillin is a Teacher Librarian at the TDSB. Over the last 3 years she has led a team to organize a S.T.E.A.M. Conference for approximately 180 Intermediate students from Queen Victoria P.S. and Parkdale Public. The purpose of the conference is to inspire these young people and to show them what they can also aspire to. Queen Victoria has a history of promoting the Arts in Education, and so the conference was also partly to expand the notion of STEM to incorporate the Arts and creativity.

Jenn Stroud Rossmann is a professor of mechanical engineering at Lafayette College. Her research interests include cardiovascular and respiratory fluid mechanics and interdisciplinary pedagogies. She co-authored an innovative textbook, Introduction to Engineering Mechanics: A Continuum Approach (CRC Press, Second Edition, 2015), and writes the essay series “An Engineer Reads a Novel” for Public Books. She is also a fiction writer whose work (in such journals as Cheap Pop, Literary Orphans, Tahoma Literary Review) has earned several Pushcart Prize nominations and other honors; her first novel is forthcoming in Fall 2018 from 7.13 Books.

Lauren Williams is Special Collections Librarian in the Department of Rare Books and Special Collections, Thomas Fisher Rare Book Library. Lauren is a graduate of the University of Toronto iSchool, where she specialized in Library and Information Science and participated in the Book History and Print Culture Collaborative Program.

Enjoy!

A cheaper way to make artificial organs

In the quest to develop artificial organs, the University of British Columbia (UBC) is not the first research institution that comes to my mind. It seems I may need to reevaluate now that UBC (Okanagan) has announced some work on bio-inks and artificial organs in a Sept. 12, 2017 news release (also on EurekAlert) by Patty Wellborn,

A new bio-ink that may support a more efficient and inexpensive fabrication of human tissues and organs has been created by researchers at UBC’s Okanagan campus.

Keekyoung Kim, an assistant professor at UBC Okanagan’s School of Engineering, says this development can accelerate advances in regenerative medicine.

Using techniques like 3D printing, scientists are creating bio-material products that function alongside living cells. These products are made using a number of biomaterials including gelatin methacrylate (GelMA), a hydrogel that can serve as a building block in bio-printing. This type of bio-material—called bio-ink—is made of living cells, but can be printed and molded into specific organ or tissue shapes.

The UBC team analyzed the physical and biological properties of three different GelMA hydrogels—porcine skin, cold-water fish skin and cold-soluble gelatin. They found that hydrogel made from cold-soluble gelatin (gelatin which dissolves without heat) was by far the best performer and a strong candidate for future 3D organ printing.

“A big drawback of conventional hydrogel is its thermal instability. Even small changes in temperature cause significant changes in its viscosity or thickness,” says Kim. “This makes it problematic for many room temperature bio-fabrication systems, which are compatible with only a narrow range of hydrogel viscosities and which must generate products that are as uniform as possible if they are to function properly.”

Kim’s team created two new hydrogels—one from fish skin, and one from cold-soluble gelatin—and compared their properties to those of porcine skin GelMA. Although fish skin GelMA had some benefits, cold-soluble GelMA was the top overall performer. Not only could it form healthy tissue scaffolds, allowing cells to successfully grow and adhere to it, but it was also thermally stable at room temperature.

The UBC team also demonstrated that cold-soluble GelMA produces consistently uniform droplets at room temperature, thus making it an excellent choice for use in 3D bio-printing.

“We hope this new bio-ink will help researchers create improved artificial organs and lead to the development of better drugs, tissue engineering and regenerative therapies,” Kim says. “The next step is to investigate whether or not cold-soluble GelMA-based tissue scaffolds can be used long-term both in the laboratory and in real-world transplants.”

Three times cheaper than porcine skin gelatin, cold-soluble gelatin is used primarily in culinary applications.

Here’s a link to and a citation for the paper,

Comparative study of gelatin methacrylate hydrogels from different sources for biofabrication applications by Zongjie Wang, Zhenlin Tian, Fredric Menard, and Keekyoung Kim. Biofabrication, Volume 9, Number 4 (Special issue on Bioinks). https://doi.org/10.1088/1758-5090/aa83cf Published 21 August 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.

Julie Payette, Canada’s Governor General, takes on science deniers and bogus science at 2017 Canadian Science Policy Conference

On the first day of the 2017 Canadian Science Policy Conference (Nov. 1-3, 2017 in Ottawa, Ontario), Governor General Julie Payette’s speech encouraged listeners to grapple with science deniers, fake news, and more (from a Nov. 2, 2017 article by Mia Rabson in the Huffington Post, Canada edition),

Payette was the keynote speaker at the ninth annual Canadian Science Policy Convention in Ottawa Wednesday night [Nov. 1, 2017] where she urged her friends and former colleagues to take responsibility to shut down the misinformation about everything from health and medicine to climate change and even horoscopes that has flourished with the explosion of digital media.

“Can you believe that still today in learned society, in houses of government, unfortunately, we’re still debating and still questioning whether humans have a role in the Earth warming up or whether even the Earth is warming up, period,” she asked, her voice incredulous.

She generated giggles and even some guffaws from the audience when she said too many people still believe “taking a sugar pill will cure cancer if you will it good enough and that your future and every single one of the people here’s personalities can be determined by looking at planets coming in front of invented constellations.”

Payette was trained as a computer engineer and later became an astronaut and licensed pilot and in 1999 was the first Canadian to board the International Space Station.

Mia Rabson in another Nov. 2, 2017 article (this time for 680news.com) presents responses to the speech from various interested parties,

According to popular Canadian astrologer Georgia Nicols, Canada’s Governor General should be doing what she can to “keep the peace” with loved ones today and avoid the “planetary vibe” that is urging people to engage in power struggles and disputes.

The advice, contained in Julie Payette’s Nov. 2 [2017] horoscope on Nicols’ website, might have come a day late, though Payette likely wouldn’t have listened to it anyway.

The Governor General made clear in a speech to scientists at an Ottawa convention Wednesday she has a very low opinion of the validity of horoscopes, people who believe in creationism or those who don’t believe in climate change.

Emmett Macfarlane, a political science professor at the University of Waterloo, said nothing stops a governor general from stating opinions and while there have been unwritten traditions against it, Payette’s most recent predecessors did not always hold their tongues.

Conservative political strategist Alise Mills said Payette went way over the line with her speech, which she characterized as not only political but “mean-spirited.”

“I definitely agree science is key but I think there is a better way to do that without making fun of other people,” Mills said.

There isn’t a lot of data on horoscope and astrology beliefs in Canada but a 2005 Gallup poll suggested around one in four Canadians believed in astrology.

Prime Minister Justin Trudeau didn’t seem to have any issue with what Payette said, saying his government and Canadians understand the value of science.

Mills said Payette wasn’t just promoting science, she was mocking people with religious beliefs, and specifically, evangelical Christians who don’t believe evolutionary science.

Astrologer Nicols said she had “no wish to take on a woman who is as accomplished as Julie Payette,” whom she notes is a “feisty Libra with three planets in Scorpio.”

But she did suggest Payette would be better to stick to what she knows.

“Astrology is not the stuff of horoscopes in newspapers, albeit I do write them,” wrote Nicols in an e-mail. “It is actually a complex study based on mathematics. Not fairy dust falling from the stars.”

There is one thing I find a bit surprising: Payette doesn’t seem to have taken on the vaccination issue. In any event, it looks like the conference had an exciting start.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region.” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, ‘Orality and Literacy’ (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving air planes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert (a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here). Note: Links have been removed,

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and issues with bias, which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’), or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Robots in Vancouver and in Canada (one of two)

This piece just started growing. It started with robot ethics, moved on to sexbots and news of an upcoming Canadian robotics roadmap. Then, it became a two-part posting, with the robotics strategy (roadmap) moving to part two along with robots and popular culture and a further exploration of robot and AI ethics issues.

What is a robot?

There are lots of robots, some are macroscale and others are at the micro and nanoscales (see my Sept. 22, 2017 posting for the latest nanobot). Here’s a definition from the Robot Wikipedia entry that covers all the scales. (Note: Links have been removed),

A robot is a machine—especially one programmable by a computer—capable of carrying out a complex series of actions automatically.[2] Robots can be guided by an external control device or the control may be embedded within. Robots may be constructed to take on human form but most robots are machines designed to perform a task with no regard to how they look.

Robots can be autonomous or semi-autonomous and range from humanoids such as Honda’s Advanced Step in Innovative Mobility (ASIMO) and TOSY’s TOSY Ping Pong Playing Robot (TOPIO) to industrial robots, medical operating robots, patient assist robots, dog therapy robots, collectively programmed swarm robots, UAV drones such as General Atomics MQ-1 Predator, and even microscopic nano robots. [emphasis mine] By mimicking a lifelike appearance or automating movements, a robot may convey a sense of intelligence or thought of its own.

We may think we’ve invented robots but the idea has been around for a very long time (from the Robot Wikipedia entry; Note: Links have been removed),

Many ancient mythologies, and most modern religions include artificial people, such as the mechanical servants built by the Greek god Hephaestus[18] (Vulcan to the Romans), the clay golems of Jewish legend and clay giants of Norse legend, and Galatea, the mythical statue of Pygmalion that came to life. Since circa 400 BC, myths of Crete include Talos, a man of bronze who guarded the Cretan island of Europa from pirates.

In ancient Greece, the Greek engineer Ctesibius (c. 270 BC) “applied a knowledge of pneumatics and hydraulics to produce the first organ and water clocks with moving figures.”[19][20] In the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical steam-operated bird he called “The Pigeon”. Hero of Alexandria (10–70 AD), a Greek mathematician and inventor, created numerous user-configurable automated devices, and described machines powered by air pressure, steam and water.[21]

The 11th century Lokapannatti tells of how the Buddha’s relics were protected by mechanical robots (bhuta vahana yanta), from the kingdom of Roma visaya (Rome); until they were disarmed by King Ashoka. [22] [23]

In ancient China, the 3rd century text of the Lie Zi describes an account of humanoid automata, involving a much earlier encounter between Chinese emperor King Mu of Zhou and a mechanical engineer known as Yan Shi, an ‘artificer’. Yan Shi proudly presented the king with a life-size, human-shaped figure of his mechanical ‘handiwork’ made of leather, wood, and artificial organs.[14] There are also accounts of flying automata in the Han Fei Zi and other texts, which attributes the 5th century BC Mohist philosopher Mozi and his contemporary Lu Ban with the invention of artificial wooden birds (ma yuan) that could successfully fly.[17] In 1066, the Chinese inventor Su Song built a water clock in the form of a tower which featured mechanical figurines which chimed the hours.

The beginning of automata is associated with Su Song’s astronomical clock tower, which featured mechanical figurines that chimed the hours.[24][25][26] His mechanism had a programmable drum machine with pegs (cams) that bumped into little levers that operated percussion instruments. The drummer could be made to play different rhythms and different drum patterns by moving the pegs to different locations.[26]

In Renaissance Italy, Leonardo da Vinci (1452–1519) sketched plans for a humanoid robot around 1495. Da Vinci’s notebooks, rediscovered in the 1950s, contained detailed drawings of a mechanical knight now known as Leonardo’s robot, able to sit up, wave its arms and move its head and jaw.[28] The design was probably based on anatomical research recorded in his Vitruvian Man. It is not known whether he attempted to build it.

In Japan, complex animal and human automata were built between the 17th and 19th centuries, with many described in the 18th century Karakuri zui (Illustrated Machinery, 1796). One such automaton was the karakuri ningyō, a mechanized puppet.[29] Different variations of the karakuri existed: the Butai karakuri, which were used in theatre, the Zashiki karakuri, which were small and used in homes, and the Dashi karakuri, which were used in religious festivals, where the puppets were used to perform reenactments of traditional myths and legends.

The term robot was coined by a Czech writer (from the Robot Wikipedia entry; Note: Links have been removed)

‘Robot’ was first applied as a term for artificial automata in a 1920 play R.U.R. by the Czech writer, Karel Čapek. However, Josef Čapek was named by his brother Karel as the true inventor of the term robot.[6][7] The word ‘robot’ itself was not new, having been in the Slavic languages as robota (forced laborer), a term which classified those peasants obligated to compulsory service under the feudal system widespread in 19th century Europe (see: Robot Patent).[37][38] Čapek’s fictional story postulated the technological creation of artificial human bodies without souls, and the old theme of the feudal robota class eloquently fit the imagination of a new class of manufactured, artificial workers.

I’m particularly fascinated by how long humans have been imagining and creating robots.

Robot ethics in Vancouver

The Westender has run what I believe is the first article by a local (Vancouver, Canada) mainstream media outlet on the topic of robots and ethics. Tessa Vikander’s Sept. 14, 2017 article highlights two local researchers, Ajung Moon and Mark Schmidt, and a local social media company’s (Hootsuite) analytics director, Nik Pai. Vikander opens her piece with an ethical dilemma (Note: Links have been removed),

Emma is 68, in poor health and an alcoholic who has been told by her doctor to stop drinking. She lives with a care robot, which helps her with household tasks.

Unable to fix herself a drink, she asks the robot to do it for her. What should the robot do? Would the answer be different if Emma owns the robot, or if she’s borrowing it from the hospital?

This is the type of hypothetical, ethical question that Ajung Moon, director of the Open Roboethics Initiative [ORI], is trying to answer.

According to an ORI study, half of respondents said ownership should make a difference, and half said it shouldn’t. With society so torn on the question, Moon is trying to figure out how engineers should be programming this type of robot.

A Vancouver resident, Moon is dedicating her life to helping those in the decision-chair make the right choice. The question of the care robot is but one ethical dilemma in the quickly advancing world of artificial intelligence.

At the most sensationalist end of the scale, one form of AI that’s recently made headlines is the sex robot, which has a human-like appearance. A report from the Foundation for Responsible Robotics says that intimacy with sex robots could lead to greater social isolation [emphasis mine] because they desensitize people to the empathy learned through human interaction and mutually consenting relationships.

I’ll get back to the impact that robots might have on us in part two but first,

Sexbots, could they kill?

For more about sexbots in general, Alessandra Maldonado wrote an Aug. 10, 2017 article for salon.com about them (Note: A link has been removed),

Artificial intelligence has given people the ability to have conversations with machines like never before, such as speaking to Amazon’s personal assistant Alexa or asking Siri for directions on your iPhone. But now, one company has widened the scope of what it means to connect with a technological device and created a whole new breed of A.I. — specifically for sex-bots.

Abyss Creations has been in the business of making hyperrealistic dolls for 20 years, and by the end of 2017, they’ll unveil their newest product, an anatomically correct robotic sex toy. Matt McMullen, the company’s founder and CEO, explains the goal of sex robots is companionship, not only a physical partnership. “Imagine if you were completely lonely and you just wanted someone to talk to, and yes, someone to be intimate with,” he said in a video depicting the sculpting process of the dolls. “What is so wrong with that? It doesn’t hurt anybody.”

Maldonado also embedded this video into her piece,

A friend of mine described it as creepy. Specifically, we were discussing why someone would want to programme ‘insecurity’ as a desirable trait in a sexbot.

Marc Beaulieu’s concept of a desirable trait in a sexbot is one that won’t kill him, according to his Sept. 25, 2017 article for Canadian Broadcasting Corporation (CBC) news online (Note: Links have been removed),

Harmony has a charming Scottish lilt, albeit a bit staccato and canny. Her eyes dart around the room, her chin dips as her eyebrows raise in coquettish fashion. Her face manages expressions that are impressively lifelike. That face comes in 31 different shapes and 5 skin tones, with or without freckles and it sticks to her cyber-skull with magnets. Just peel it off and switch it out at will. In fact, you can choose Harmony’s eye colour, body shape (in great detail) and change her hair too. Harmony, of course, is a sex bot. A very advanced one. How advanced is she? Well, if you have $12,332 CAD to put towards a talkative new home appliance, REALBOTIX says you could be having a “conversation” and relations with her come January. Happy New Year.

Caveat emptor though: one novel bonus feature you might also get with Harmony is her ability to eventually murder you in your sleep. And not because she wants to.

Dr Nick Patterson, faculty of Science Engineering and Built Technology at Deakin University in Australia is lending his voice to a slew of others warning us to slow down and be cautious as we steadily approach Westworldian levels of human verisimilitude with AI tech. Surprisingly, Patterson didn’t regurgitate the narrative we recognize from the popular sci-fi (increasingly non-fi actually) trope of a dystopian society’s futile resistance to a robocalypse. He doesn’t think Harmony will want to kill you. He thinks she’ll be hacked by a code savvy ne’er-do-well who’ll want to snuff you out instead. …

Embedded in Beaulieu’s article is another video of the same sexbot profiled earlier. Her programmer seems to have learned a thing or two (he no longer inputs any traits as you’re watching),

I guess you could get one for Christmas this year if you’re willing to wait for an early 2018 delivery and aren’t worried about hackers turning your sexbot into a killer. While the killer aspect might seem far-fetched, it turns out it’s not the only sexbot/hacker issue.

Sexbots as spies

This Oct. 5, 2017 story by Karl Bode for Techdirt points out that sex toys that are ‘smart’ can easily be hacked for any reason including some mischief (Note: Links have been removed),

One “smart dildo” manufacturer was recently forced to shell out $3.75 million after it was caught collecting, err, “usage habits” of the company’s customers. According to the lawsuit, Standard Innovation’s We-Vibe vibrator collected sensitive data about customer usage, including “selected vibration settings,” the device’s battery life, and even the vibrator’s “temperature.” At no point did the company apparently think it was a good idea to clearly inform users of this data collection.

But security is also lacking elsewhere in the world of internet-connected sex toys. Alex Lomas of Pentest Partners recently took a look at the security in many internet-connected sex toys, and walked away arguably unimpressed. Using a Bluetooth “dongle” and antenna, Lomas drove around Berlin looking for openly accessible sex toys (he calls it “screwdriving,” in a riff off of wardriving). He subsequently found it’s relatively trivial to discover and hijack everything from vibrators to smart butt plugs — thanks to the way Bluetooth Low Energy (BLE) connectivity works:

“The only protection you have is that BLE devices will generally only pair with one device at a time, but range is limited and if the user walks out of range of their smartphone or the phone battery dies, the adult toy will become available for others to connect to without any authentication. I should say at this point that this is purely passive reconnaissance based on the BLE advertisements the device sends out – attempting to connect to the device and actually control it without consent is not something I or you should do. But now one could drive the Hush’s motor to full speed, and as long as the attacker remains connected over BLE and not the victim, there is no way they can stop the vibrations.”
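Lomas’s point about “purely passive reconnaissance based on the BLE advertisements the device sends out” is easy to make concrete. BLE advertisements are broadcast unencrypted as a series of length/type/value structures, so anyone in radio range can read, for example, the device’s advertised name. The sketch below parses a made-up advertisement payload (not captured from any real product); the AD type numbers are from the Bluetooth assigned-numbers list.

```python
# Illustrative only: parse a raw BLE advertising payload into its fields.
# Each AD structure is [length][type][data...], where length covers the
# type byte plus the data.
def parse_ble_advertisement(payload: bytes) -> dict:
    """Split a raw BLE advertising payload into {ad_type: data} fields."""
    fields = {}
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:
            break
        ad_type = payload[i + 1]
        fields[ad_type] = payload[i + 2 : i + 1 + length]
        i += 1 + length
    return fields

# 0x01 = Flags, 0x09 = Complete Local Name (hypothetical sample payload)
sample = bytes([0x02, 0x01, 0x06, 0x05, 0x09]) + b"Hush"
name = parse_ble_advertisement(sample).get(0x09, b"").decode()
print(name)  # the device's name, broadcast in the clear -> "Hush"
```

No pairing, connection, or authentication is involved in reading this; that is exactly why “screwdriving” with a dongle and antenna works.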

Does that make you think twice about a sexbot?

Robots and artificial intelligence

Getting back to the Vikander article (Sept. 14, 2017), Moon or Vikander or both seem to have conflated artificial intelligence with robots in this section of the article,

As for the building blocks that have thrust these questions [care robot quandary mentioned earlier] into the spotlight, Moon explains that AI in its basic form is when a machine uses data sets or an algorithm to make a decision.

“It’s essentially a piece of output that either affects your decision, or replaces a particular decision, or supports you in making a decision.” With AI, we are delegating decision-making skills or thinking to a machine, she says.

Although we’re not currently surrounded by walking, talking, independently thinking robots, the use of AI [emphasis mine] in our daily lives has become widespread.
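Moon’s working definition, a machine using data sets or an algorithm to produce an output that affects or replaces a decision, can be illustrated with a toy example. Here the “decision” (flagging a message as spam) is delegated to a threshold derived from labelled data rather than written by hand; the data and threshold rule are invented for illustration.

```python
# Toy illustration of decision-making delegated to data (invented data).
labelled = [(1, False), (2, False), (8, True), (9, True)]  # (link_count, is_spam)

def learn_threshold(examples):
    """Derive a decision threshold from data: midpoint of the class averages."""
    spam = [x for x, y in examples if y]
    ham = [x for x, y in examples if not y]
    return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

threshold = learn_threshold(labelled)

def decide(link_count):
    """The machine's 'piece of output' that replaces a human judgment."""
    return link_count >= threshold

print(threshold, decide(7))  # 5.0 True
```

Nothing here walks or talks, yet it fits Moon’s definition, which is why AI in this basic sense is already widespread while independently thinking robots are not.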

For Vikander, the conflation may have been due to concerns about maintaining her word count; for Moon, it may have been one of convenience, or a consequence of how the jargon is evolving, with ‘robot’ sometimes meaning a machine specifically, sometimes a machine with AI, and sometimes AI alone.

To be precise, not all robots have AI and not all AI is found in robots. It’s a distinction that may be more important for people developing robots and/or AI but it also seems to make a difference where funding is concerned. In a March 24, 2017 posting about the 2017 Canadian federal budget I noticed this,

… The Canadian Institute for Advanced Research will receive $93.7 million [emphasis mine] to “launch a Pan-Canadian Artificial Intelligence Strategy … (to) position Canada as a world-leading destination for companies seeking to invest in artificial intelligence and innovation.”

This brings me to a recent set of meetings held in Vancouver to devise a Canadian robotics roadmap, which suggests the robotics folks feel they need specific representation and funding.

See: part two for the rest.