Tag Archives: AI

Is China the world leader in nanotechnology and in other fields too?

State of Chinese nanoscience/nanotechnology

China claims to be the world leader in the field in a white paper announced in an August 29, 2017 Springer Nature press release,

Springer Nature, the National Center for Nanoscience and Technology, China and the National Science Library of the Chinese Academy of Sciences (CAS) released in both Chinese and English a white paper entitled “Small Science in Big China: An overview of the state of Chinese nanoscience and technology” at NanoChina 2017, an international conference on nanoscience and technology held August 28 and 29 in Beijing. The white paper looks at the rapid growth of China’s nanoscience research into its current role as the world’s leader [emphasis mine], examines China’s strengths and challenges, and makes some suggestions for how its contribution to the field can continue to thrive.

The white paper points out that China has become a strong contributor to nanoscience research in the world, and is a powerhouse of nanotechnology R&D. Some of China’s basic research is leading the world. China’s applied nanoscience research and the industrialization of nanotechnologies have also begun to take shape. These achievements are largely due to China’s strong investment in nanoscience and technology. China’s nanoscience research is also moving from quantitative increase to quality improvement and innovation, with greater emphasis on the applications of nanotechnologies.

“China took an initial step into nanoscience research some twenty years ago, and has since grown its commitment at an unprecedented rate, as it has for scientific research as a whole. Such a growth is reflected both in research quantity and, importantly, in quality. Therefore, I regard nanoscience as a window through which to observe the development of Chinese science, and through which we could analyze how that rapid growth has happened. Further, the experience China has gained in developing nanoscience and related technologies is a valuable resource for the other countries and other fields of research to dig deep into and draw on,” said Arnout Jacobs, President, Greater China, Springer Nature.

The white paper examines China’s research output relative to the rest of the world in terms of research paper output, research contribution contained in the Nano database and, finally, patents, providing insight into China’s strengths and expertise in nano research. The white paper also presents the results of a survey of experts from the community discussing the outlook for, and challenges to, the future of China’s nanoscience research.

China nano research output: strong rise in quantity and quality

In 1997, around 13,000 nanoscience-related papers were published globally. By 2016, this number had risen to more than 154,000 nano-related research papers, a compound annual growth rate of 14%, almost four times the 3.7% growth in publications across all areas of research. Over the same period, China’s nano-related output grew from 820 papers in 1997 to over 52,000 papers in 2016, a compound annual growth rate of 24%.
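
As a quick check on those growth figures, the compound annual growth rate is simply (end/start)^(1/years) - 1. Here is a minimal sketch in Python using the rounded paper counts quoted above over the 19 years from 1997 to 2016 (the white paper's exact counts may differ slightly):

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` intervals."""
    return (end / start) ** (1 / years) - 1

# Nano-related papers, 1997 -> 2016 (19 intervening years), rounded figures
print(f"Global: {cagr(13_000, 154_000, 19):.1%}")  # ~13.9%, i.e. the quoted 14%
print(f"China:  {cagr(820, 52_000, 19):.1%}")      # ~24.4%, i.e. the quoted 24%
```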

China’s contribution to the global total has been growing steadily. In 1997, Chinese researchers co-authored just 6% of the nano-related papers contained in the Science Citation Index (SCI). By 2010, this grew to match the output of the United States. They now contribute over a third of the world’s total nanoscience output — almost twice that of the United States.

Additionally, China’s share of the most cited nanoscience papers has kept increasing year on year, with a compound annual growth rate of 22% — more than three times the global rate. It overtook the United States in 2014 and its contribution is now many times greater than that of any other country in the world, manifesting an impressive progression in both quantity and quality.

The rapid growth of nanoscience in China has been enabled by consistent and strong financial support from the Chinese government. As early as 1990, the State Science and Technology Committee, the predecessor of the Ministry of Science and Technology (MOST), launched the Climbing Up project on nanomaterial science. During the 1990s, the National Natural Science Foundation of China (NSFC) also funded nearly 1,000 small-scale projects in nanoscience. In the National Guideline on Medium- and Long-Term Program for Science and Technology Development (for 2006−2020) issued in early 2006 by the Chinese central government, nanoscience was identified as one of four areas of basic research and received the largest proportion of research budget out of the four areas. The brain boomerang, with more and more foreign-trained Chinese researchers returning from overseas, is another contributor to China’s rapid rise in nanoscience.

The white paper clarifies the role of Chinese institutions, including CAS, in driving China’s rise to become the world’s leader in nanoscience. Currently, CAS is the world’s largest producer of high impact nano research, contributing more than twice as many papers in the 1% most-cited nanoscience literature as its closest competitors. In addition to CAS, five other Chinese institutions are ranked among the global top 20 in terms of output of top-cited 1% nanoscience papers — Tsinghua University, Fudan University, Zhejiang University, University of Science and Technology of China and Peking University.

Nano database reveals advantages and focus of China’s nano research

The Nano database (http://nano.nature.com) is a comprehensive platform that has been recently developed by Nature Research – part of Springer Nature – which contains nanoscience-related papers published in 167 peer-reviewed journals including Advanced Materials, Nano Letters, Nature, Science and more. Analysis of the Nano database of nanomaterial-containing articles published in top 30 journals during 2014–2016 shows that Chinese scientists explore a wide range of nanomaterials, the five most common of which are nanostructured materials, nanoparticles, nanosheets, nanodevices and nanoporous materials.

In terms of applications research, China has a clear leading edge in catalysis, which is the most popular area of the country’s quality nanoscience papers. Chinese nano researchers have also contributed significantly to nanomedicine and energy-related applications. China is relatively weaker in nanomaterials for electronics applications compared to other research powerhouses, but robotics and lasers are emerging application areas of nanoscience in China, and nanoscience papers addressing photonics and data storage applications are also growing strongly. Over 80% of research from China listed in the database explicitly mentions applications of the nanostructures and nanomaterials described, notably higher than for most other leading nations such as the United States, Germany, the UK, Japan and France.

The Nano database also reveals the extent of China’s international collaboration in nano research. The percentage of China’s internationally collaborated papers increased from 36% in 2014 to 44% in 2016. This level of international collaboration, similar to that of South Korea, is still much lower than that of western countries, and the rate of growth is not as fast as in the United States, France and Germany.

The United States is China’s biggest international collaborator, contributing to 55% of China’s internationally collaborated papers on nanoscience that are included in the top 30 journals in the Nano database. Germany, Australia and Japan follow in descending order as China’s collaborators on nano-related quality papers.

China’s patent output: topping the world, mostly applied domestically

Analysis of the Derwent Innovation Index (DII) database from Clarivate Analytics shows that China’s cumulative total of patent applications over the past 20 years, amounting to 209,344 applications, or 45% of the global total, is more than twice that of the United States, the second largest contributor to nano-related patents. China surpassed the United States in 2008 and has ranked first in the world since.

Five Chinese institutions (CAS, Zhejiang University, Tsinghua University, Hon Hai Precision Industry Co., Ltd. and Tianjin University) can be found among the global top 10 institutional contributors to nano-related patent applications. CAS has been at the top of the global rankings since 2008, with a total of 11,218 patent applications over the past 20 years. Interestingly, outside of China most of the other big institutional contributors in the top 10 are commercial enterprises, whereas in China research and academic institutions lead in patent applications.

However, the number of nano-related patents China has filed overseas is still very low, accounting for only 2.61% of its total patent applications over the last 20 years, whereas the proportion for the United States is nearly 50%. In some European countries, including the UK and France, more than 70% of patent applications are filed overseas.

China has high numbers of patent applications in several popular technical areas for nanotechnology use, and is strongest in patents for polymer compositions and macromolecular compounds. In comparison, nano-related patent applications in the United States, South Korea and Japan are mainly for electronics or semiconductor devices, with the United States leading the world in the cumulative number of patents for semiconductor devices.

Outlook, opportunities and challenges

The white paper highlights that the rapid rise of China’s research output and patent applications has painted a rosy picture for the development of Chinese nanoscience, and in both the traditionally strong subjects and newly emerging areas, Chinese nanoscience shows great potential.

Several of the experts interviewed for the survey identify catalysis and catalytic nanomaterials as the most promising nanoscience area for China. The use of nanotechnology in the energy and medical sectors was also considered very promising.

Some of the interviewed experts commented that the industrial impact of China’s nanotechnology is limited and there is still a gap between nanoscience research and the industrialization of nanotechnologies. Therefore, they recommended that the government invest more in applied research to drive the translation of nanoscience research and find ways to encourage enterprises to invest more in R&D.

As more and more young scientists enter the field, the competition for research funding is becoming more intense. However, this increasing competition did not concern most of the young scientists interviewed; rather, they emphasized that the “soft environment” is more important. They recommended establishing channels that allow the suggestions and creative ideas of young researchers to be heard. Some interviewed young researchers also commented that they felt the current evaluation system was geared towards past achievements or favoured overseas experience, and recommended developing an improved talent selection mechanism to ensure sustainable growth of China’s nanoscience.

I have taken a look at the white paper and found it to be well written. It also provides a brief but thorough history of nanotechnology/nanoscience, even adding a bit of historical information that was new to me. As for the rest of the white paper, it relies on bibliometrics (number of published papers and number of citations) and the number of patents filed to lay the groundwork for claiming Chinese leadership in nanotechnology. As I’ve stated many times before, these are problematic measures, but as far as I can determine they are almost the only ones we have. Frankly, as a Canadian, it doesn’t much matter to me since Canada, no matter how you slice or dice it, is always in a lower tier relative to science leadership in major fields. It’s the Americans who might feel inclined to debate leadership with regard to nanotechnology and other major fields, and I leave it to US commentators to take up the cudgels should they be inclined. The big bonuses here are the history, the glimpse into the Chinese perspective on the field of nanotechnology/nanoscience, and the analysis of weaknesses and strengths.

Coming up fast on Google and Amazon

A November 16, 2017 article by Christina Bonnington for Slate explores the possibility that a Chinese tech giant, Baidu, will provide Google and Amazon with serious competition in their quests to dominate world markets (Note: Links have been removed),

[Image: the Raven H smart speaker. “The company took a playful approach to the form—but it has functional reasons for the design, too.” Credit: Baidu]

One of the most interesting companies in tech right now isn’t based in Palo Alto, or San Francisco, or Seattle. Baidu, a Chinese company with headquarters in Beijing, is taking on America’s biggest and most innovative tech titans—with style.

Baidu, a titan in its own right, leapt onto the scene as a competitor to Google in the search engine space. Since then, the company, largely underappreciated here in the U.S., has focused on beefing up its artificial intelligence efforts. Former AI chief Andrew Ng, upon leaving the company in March, credited Baidu’s CEO Robin Li with being one of the first technology leaders to fully appreciate the value of deep learning. Baidu now has a 1,300-person AI group, and that investment in AI has helped the company catch up to older, more established companies like Google and Amazon—both in emerging spaces, such as autonomous vehicles, and in consumer tech, as its latest announcement shows.

On Thursday [November 16, 2017], Baidu debuted its entrants to the popular virtual assistant space: a connected speaker and two robots. Baidu aims for the speaker to compete against options such as Amazon’s Echo line, Google Home, and Apple HomePod. Inside, the $256 device will utilize Baidu’s DuerOS conversational artificial intelligence platform, which is already used in more than 100 different smart home brands’ products. DuerOS will let you use your voice to do things like ask the speaker for information, play music, or hail a cab. Called the Raven H, the speaker includes high-end audio components from Tymphany and a unique design jointly created by acquired startup Raven Tech and Swedish consumer electronics company Teenage Engineering.

While the focus is on exciting new technology products from Baidu, the subtext, such as it is, suggests US companies had best keep an eye on their Chinese competitor(s).

Dutch/Chinese partnership to produce nanoparticles at the touch of a button

Now back to China and nanotechnology leadership and the production of nanoparticles. This announcement was made in a November 17, 2017 news item on Azonano,

Delft University of Technology [Netherlands] spin-off VSPARTICLE enters the booming Chinese market with a radical technology that allows researchers to produce nanoparticles at the push of a button. VSPARTICLE’s nanoparticle generator uses atoms, the world’s smallest building blocks, to provide a controllable source of nanoparticles. The start-up from Delft signed a distribution agreement with Bio-Sun to make its VSP-G1 nanoparticle generator available in China.

A November 16, 2017 VSPARTICLE press release, which originated the news item,

“We are honoured to cooperate with VSPARTICLE and bring the innovative VSP-G1 nanoparticle generator into the Chinese market. The VSP-G1 will create new possibilities for researchers in catalysis, aerosol, healthcare and electronics,” says Yinghui Cai, CEO of Bio-Sun.

With an exponential growth in nanoparticle research in the last decade, China is one of the leading countries in the field of nanotechnology and its applications. Vincent Laban, CFO of VSPARTICLE, explains: “Due to its immense investments in IOT, sensors, semiconductor technology, renewable energy and healthcare applications, China will eventually become one of our biggest markets. The collaboration with Bio-Sun offers a valuable opportunity to enter the Chinese market at exactly the right time.”

NANOPARTICLES ARE THE BUILDING BLOCKS OF THE FUTURE

Increasingly, scientists are focusing on nanoparticles as a key technology in enabling the transition to a sustainable future. Nanoparticles are used to make new types of sensors and smart electronics; provide new imaging and treatment possibilities in healthcare; and reduce harmful waste in chemical processes.

CURRENT RESEARCH TOOLKIT LACKS A FAST WAY FOR MAKING SPECIFIC BUILDING BLOCKS

With the latest tools in nanotechnology, researchers are exploring the possibilities of building novel materials. This is, however, a trial-and-error method. Getting the right nanoparticles often is a slow struggle, as most production methods take a substantial amount of effort and time to develop.

VSPARTICLE’S VSP-G1 NANOPARTICLE GENERATOR

With the VSP-G1 nanoparticle generator, VSPARTICLE makes the production of nanoparticles as easy as pushing a button. Easy and fast iterations enable researchers to fast-forward their research cycle and verify their hypotheses.

VSPARTICLE

Born out of the research labs of Delft University of Technology, with over 20 years of experience in the synthesis of aerosol, VSPARTICLE believes there is a whole new world of possibilities and materials at the nanoscale. The company was founded in 2014 and has an international sales network in Europe, Japan and China.

BIO-SUN

Bio-Sun was founded in Beijing in 2010 and is a leader in promoting nanotechnology and biotechnology instruments in China. It serves many renowned customers in life science, drug discovery and materials science. Bio-Sun has four branch offices, in Qingdao, Shanghai, Guangzhou and Wuhan City, and a nationwide sales network.

That’s all folks!

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be the realm of science fiction, but now we’ve figured it out; it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game, when Blackberry was already in serious trouble due to a failure to recognize that the field it had helped to create was moving in a new direction. If memory serves, the company was trying to keep its technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society. As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research. That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo. Success in recruiting and the resulting space requirements led to an expansion of the Perimeter facility. A uniquely designed addition, which has been described as spaceship-like, was opened in 2011 as the Stephen Hawking Centre, in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5. They are also co-founders of BlackBerry (formerly Research In Motion Limited). Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million. Since that time Doug has donated a total of $30 million to Perimeter Institute. Separately, Doug helped establish the Waterloo Institute for Nanotechnology (WIN) at the University of Waterloo with total gifts of $29 million. As suggested by its name, WIN is devoted to research in the area of nanotechnology. It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries. Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries. Local experimentalists are very much playing a leading role in this regard. It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop companies and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing, based on so-called gate-model architecture, was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”
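
The excerpt doesn’t spell out what “quantum annealing” actually computes, so here is a minimal, hypothetical sketch of the problem class an annealer targets: finding the lowest-energy assignment of a quadratic binary cost function (a QUBO). The coefficients below are invented purely for illustration; a real annealer searches this energy landscape in hardware rather than by enumeration, which is the whole point for large problems.

```python
from itertools import product

# Toy QUBO (quadratic unconstrained binary optimization) instance.
# Hypothetical coefficients, chosen only for illustration.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1):  2.0, (1, 2):  0.5,                # pairwise couplings
}

def energy(bits, Q):
    """Energy of one binary assignment under the QUBO coefficients Q."""
    return sum(coeff * bits[i] * bits[j] for (i, j), coeff in Q.items())

# Brute-force the 2**3 assignments; an annealer is asked to reach the same
# minimum-energy state physically instead of by enumeration.
best = min(product((0, 1), repeat=3), key=lambda b: energy(b, Q))
print(best, energy(best, Q))  # (1, 0, 1) with energy -2.0
```

Problems like the wedding seating plan and taxi traffic flow that come up later in the excerpts have to be recast into roughly this form before an annealer can tackle them.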

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing  a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate  programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
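
Kuang doesn’t describe Carnival’s actual implementation, but the behaviour he outlines (logged preferences steering activity suggestions) maps onto simple content-based matching. Below is a minimal, hypothetical sketch; the tags, weights and activity names are invented and are not the Ocean Medallion’s real data model.

```python
# Hypothetical content-based matching of guest preferences to activities.
# Tags and weights are invented for illustration; this is not Carnival's model.
guest_profile = {"fine_wine": 0.9, "classical_music": 0.7, "quiet": 0.6}

activities = {
    "violin concerto":   {"classical_music": 1.0, "quiet": 0.8},
    "wine tasting":      {"fine_wine": 1.0, "quiet": 0.4},
    "limbo competition": {"party": 1.0, "loud": 0.9},
}

def score(profile, tags):
    """Weight each shared tag by the guest's stated interest and sum."""
    return sum(profile.get(tag, 0.0) * weight for tag, weight in tags.items())

ranked = sorted(activities,
                key=lambda name: score(guest_profile, activities[name]),
                reverse=True)
print(ranked)  # ['violin concerto', 'wine tasting', 'limbo competition']
```

The shipboard system presumably layers location sensing and real-time context on top of something like this, but the core idea is a guest profile being scored against tagged offerings.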

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a user’s newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, they have married electoral data with consumer data as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in this excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, via wearable and smart technology, means that another layer of control has been added to our lives, and it is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference.  The first of two days coincides with IROS 2017 – one of the premiere robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: "Developing robotics innovation policy and establishing key performance indicators that are relevant to your region." Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: "Understanding the Canadian robotics ecosystem." Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program (sponsored by Clearpath Robotics): Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program (sponsored by NSERC): Meeting participants gather at a nearby restaurant for the event's closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known 'robot' drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn't whether or not the robot should learn the skills and assist Frank in his thieving ways, although that's touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he's going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot's fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there  already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don't relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong's influential 1982 book, 'Orality and Literacy' (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what's happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data has made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone's best interests and shared their accident data in a scheme similar to the aviation industry's.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?
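
Kindred hasn't published its training pipeline, but the teleoperation-as-teaching loop described in Braga's article can be illustrated with a toy behaviour-cloning sketch. Everything here is hypothetical and only meant to show the general pattern: when the robot's own policy is unsure, a human pilot takes over, and the pilot's choices are logged as new training examples. The function names, the confidence threshold, and the canned actions are all mine, not Kindred's.

# Toy sketch of teleoperation-as-training (behaviour cloning).
# All names and numbers are illustrative; this is not Kindred's system.
import random

demonstrations = []  # (observation, action) pairs logged from the human pilot

def propose_action(observation):
    """Stand-in policy: returns (action, confidence). A real system would use a learned model."""
    return random.choice(["pick", "place", "rotate"]), random.random()

def ask_human_pilot(observation):
    """Stand-in for the human pilot taking control via teleoperation."""
    return "pick"  # in reality, the pilot sees/feels what the robot does and chooses

def handle(observation, confidence_threshold=0.8):
    action, confidence = propose_action(observation)
    if confidence < confidence_threshold:
        # The robot can't handle the scenario: hand control to the human,
        # then keep the (observation, action) pair as a training example.
        action = ask_human_pilot(observation)
        demonstrations.append((observation, action))
    return action

for item in ["box_A", "box_B", "unusual_parcel"]:
    print(item, "->", handle(item))

print(len(demonstrations), "new demonstrations collected for retraining")

The ethical point Gildert raises maps directly onto the demonstrations list: whatever the pilot does, values and mannerisms included, is what the system later learns from.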

I notice Gildert distinguishes her robots as "intelligent robots" and then focuses on AI and the issues with bias which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you're in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there's a talk being given by Dr. Cathy O'Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It's not free but tickets are here.

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it's easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release (h/t ScienceDaily March 28, 2017 news item),

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.
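
The news release describes the architecture only at a high level: record neural activity, decode the intended movement, then stimulate the paralyzed muscles. The sketch below is a deliberately crude illustration of that record-decode-stimulate pipeline; the channel values, the threshold, and the muscle names are invented, and real BCI/FES systems rely on implanted electrode arrays and carefully trained decoders rather than anything this simple.

# Highly simplified sketch of a record-decode-stimulate pipeline.
# Thresholds, channel counts, and the decoding rule are invented for illustration.

def decode_intent(neural_features):
    """Map recorded activity (here, averaged firing rates) to an intended movement."""
    firing_rate = sum(neural_features) / len(neural_features)
    return "reach" if firing_rate > 20.0 else "rest"

def stimulation_pattern(intent):
    """Translate the decoded intent into FES commands for arm muscle groups."""
    if intent == "reach":
        return {"biceps": 0.6, "wrist_extensors": 0.4}
    return {"biceps": 0.0, "wrist_extensors": 0.0}

recorded = [18.0, 25.0, 30.0, 22.0]  # pretend firing rates from four channels
intent = decode_intent(recorded)
print(intent, "->", stimulation_pattern(intent))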

Does a brain-computer interface have an effect on the human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an 'earlyish' stage (although we're already pretty far down the 'automation road') or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.
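
The paper's actual models are more sophisticated, but the core idea (rate each semantic domain on how strongly it is tied to external, concrete experience, then predict that metaphorical mappings run from the more concrete domain to the more abstract one) can be sketched in a few lines. The ratings and the historical mappings below are invented for illustration; only the prediction rule reflects the finding described in the news release.

# Illustrative sketch of the 'concreteness predicts metaphor direction' idea.
# Ratings and historical mappings below are made up for demonstration purposes.

externality_ratings = {       # higher = more tied to the external, tangible world
    "water": 0.9, "plants": 0.85, "textiles": 0.8,
    "mind": 0.2, "excitement": 0.15, "fear": 0.1,
}

historical_mappings = [       # (source domain, target domain) pairs, as in the Metaphor Map
    ("water", "mind"), ("plants", "excitement"), ("textiles", "fear"),
]

def predict_source(domain_a, domain_b):
    """Predict that the domain more tied to external experience is the metaphor source."""
    return domain_a if externality_ratings[domain_a] >= externality_ratings[domain_b] else domain_b

correct = sum(predict_source(s, t) == s for s, t in historical_mappings)
print("correct direction predicted for", correct, "of", len(historical_mappings), "mappings")

On the researchers' real data this kind of approach reportedly got the direction right about 75 percent of the time; the toy version above only shows the shape of the test, not the result.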


Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, and Mahesh Srinivasan. Cognitive Psychology, Volume 96, August 2017, Pages 41–53. DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the 'Metaphor Map of English' database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as 'Mapping Metaphor with the Historical Thesaurus'.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort if one is to take Braga’s word for the situation.

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government  budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives but at that point the article’s focus is exclusively Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies, “xxx is like no other business before’ and “it is a technological revolution.”  Also if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’ which, if you look at his LinkedIn profile, you’ll find means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Meet Pepper, a robot for health care clinical settings

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org,

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

A June 22, 2017 McMaster University news release, which originated the news item, provides more detail,

With the help of Softbank’s humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University and Hermenio Lima, a dermatologist and professor of medicine at McMaster’s Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients’ behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the [Canada] Science and Technology Museum in Ottawa.

“Pepper will help us highlight some very important aspects and motives of human behaviour and communication,” said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, ‘read’ emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: “We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients’ communications needs.”

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD’s Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

“This partnership is a testament to the collaborative nature of innovation,” said dean of FCAD, Charles Falzon. “I’m thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom.”

“This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care,” says McMaster’s Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper, offers a rich source of research potential for the projects at Ryerson and McMaster. This integration is also supported by IBM Canada and [Southern Ontario Smart Computing Innovation Platform] SOSCIP by providing the project access to high performance research computing resources and staff in Ontario.

“We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations,” said Harris Smith.
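
The news release names Pepper and IBM Watson services but doesn't describe the software stack, so the sketch below is only a guess at the general shape of such a human-robot interaction loop in a clinic: the robot listens, a cloud service estimates the patient's emotional state, and the reply adapts accordingly. Every function here is a stub I've defined for the example; none of these names are SoftBank's actual robot API or IBM's Watson SDK.

# Hypothetical outline of a robot/cloud-AI interaction loop for a clinic greeter.
# The stubs below stand in for real speech, emotion, and dialogue services;
# none of these names are actual SDK calls.

def listen():
    """Stand-in for the robot's microphone plus a speech-to-text service."""
    return "I'm a bit nervous about my appointment"

def analyze_emotion(text):
    """Stand-in for a cloud emotion/tone analysis service."""
    return "anxious" if "nervous" in text else "neutral"

def choose_reply(text, emotion):
    """Stand-in for a dialogue service that adapts to the detected emotion."""
    if emotion == "anxious":
        return "That's completely understandable. A member of staff will be with you shortly."
    return "Welcome! Please take a seat and we'll call you soon."

def speak(reply):
    """Stand-in for the robot's text-to-speech output."""
    print("Pepper says:", reply)

utterance = listen()
speak(choose_reply(utterance, analyze_emotion(utterance)))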

I just went to a presentation at the facility where my mother lives and it was all about delivering more individualized and better care for residents. Given that most seniors in British Columbia care facilities do not receive the number of service hours per resident recommended by the province due to funding issues, it seemed a well-meaning initiative offered in the face of daunting odds against success. Now with this news, I wonder what impact ‘Pepper’ might ultimately have on seniors and on the people who currently deliver service. Of course, this assumes that researchers will be able to tackle problems with understanding various accents and communication strategies, which are strongly influenced by culture and, over time, the aging process.

After writing that last paragraph I stumbled onto this June 27, 2017 Sage Publications press release on EurekAlert about a related matter,

Existing digital technologies must be exploited to enable a paradigm shift in current healthcare delivery which focuses on tests, treatments and targets rather than the therapeutic benefits of empathy. Writing in the Journal of the Royal Society of Medicine, Dr Jeremy Howick and Dr Sian Rees of the Oxford Empathy Programme, say a new paradigm of empathy-based medicine is needed to improve patient outcomes, reduce practitioner burnout and save money.

Empathy-based medicine, they write, re-establishes relationship as the heart of healthcare. “Time pressure, conflicting priorities and bureaucracy can make practitioners less likely to express empathy. By re-establishing the clinical encounter as the heart of healthcare, and exploiting available technologies, this can change”, said Dr Howick, a Senior Researcher in Oxford University’s Nuffield Department of Primary Care Health Sciences.

Technology is already available that could reduce the burden of practitioner paperwork by gathering basic information prior to consultation, for example via email or a mobile device in the waiting room.

During the consultation, the computer screen could be placed so that both patient and clinician can see it, a help to both if needed, for example, to show infographics on risks and treatment options to aid decision-making and the joint development of a treatment plan.

Dr Howick said: “The spread of alternatives to face-to-face consultations is still in its infancy, as is our understanding of when a machine will do and when a person-to-person relationship is needed.” However, he warned, technology can also get in the way. A computer screen can become a barrier to communication rather than an aid to decision-making. “Patients and carers need to be involved in determining the need for, and designing, new technologies”, he said.

I sincerely hope that the Canadian project has taken into account some of the issues described in the ’empathy’ press release and in the article, which can be found here,

Overthrowing barriers to empathy in healthcare: empathy in the age of the Internet
by J Howick and S Rees. Journal of the Royal Society of Medicine. Article first published online: June 27, 2017. DOI: https://doi.org/10.1177/0141076817714443

This article is open access.

Hacking the human brain with a junction-based artificial synaptic device

Earlier today I published a piece featuring Dr. Wei Lu’s work on memristors and the movement to create an artificial brain (my June 28, 2017 posting: Dr. Wei Lu and bio-inspired ‘memristor’ chips). For this posting I’m featuring a non-memristor (if I’ve properly understood the technology) type of artificial synapse. From a June 28, 2017 news item on Nanowerk,

One of the greatest challenges facing artificial intelligence development is understanding the human brain and figuring out how to mimic it.

Now, one group reports in ACS Nano (“Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device”) that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system — the release of inhibitory and stimulatory signals from the same “pre-synaptic” terminal.

Unfortunately, the American Chemical Society news release on EurekAlert, which originated the news item, doesn’t provide too much more detail,

The human nervous system is made up of over 100 trillion synapses, structures that allow neurons to pass electrical and chemical signals to one another. In mammals, these synapses can initiate and inhibit biological messages. Many synapses just relay one type of signal, whereas others can convey both types simultaneously or can switch between the two. To develop artificial intelligence systems that better mimic human learning, cognition and image recognition, researchers are imitating synapses in the lab with electronic components. Most current artificial synapses, however, are only capable of delivering one type of signal. So, Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can reconfigurably send stimulatory and inhibitory signals.

The researchers developed a synaptic device that can reconfigure itself based on voltages applied at the input terminal of the device. A junction made of black phosphorus and tin selenide enables switching between the excitatory and inhibitory signals. This new device is flexible and versatile, which is highly desirable in artificial neural networks. In addition, the artificial synapses may simplify the design and functions of nervous system simulations.
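
The news release stays at a high level, but the headline behaviour (a single device that delivers either excitatory or inhibitory responses depending on the voltage applied at its input terminal) can be mimicked with a toy numerical model. The parameters and the linear update rule below are mine, chosen purely for illustration; they are not taken from the ACS Nano paper.

# Toy model of a reconfigurable artificial synapse: the sign of the control
# voltage selects excitatory vs. inhibitory behaviour, and each input spike
# nudges the synaptic weight accordingly. Parameters are illustrative only.

def respond(control_voltage, spikes, weight=0.5, step=0.05):
    """Return the behaviour mode and the weight trajectory for a train of input spikes."""
    mode = "excitatory" if control_voltage > 0 else "inhibitory"
    history = [weight]
    for _ in range(spikes):
        weight += step if mode == "excitatory" else -step
        weight = max(0.0, min(1.0, weight))  # keep the weight in a physical range
        history.append(weight)
    return mode, history

for v in (+1.0, -1.0):
    mode, history = respond(v, spikes=5)
    print("control voltage", v, "V ->", mode, "response, weights", history)

The point of the toy model is simply that one device plays both roles; in the real black phosphorus/tin selenide junction the switching comes from the device physics, not from an if-statement.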

Here’s how I concluded that this is not a memristor-type device (from the paper [first paragraph, final sentence]; a link and citation will follow; Note: Links have been removed),

The conventional memristor-type [emphasis mine](14-20) and transistor-type(21-25) artificial synapses can realize synaptic functions in a single semiconductor device but lacks the ability [emphasis mine] to dynamically reconfigure between excitatory and inhibitory responses without the addition of a modulating terminal.

Here’s a link to and a citation for the paper,

Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device by He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang, Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, and Han Wang. ACS Nano, Article ASAP. DOI: 10.1021/acsnano.7b03033. Publication Date (Web): June 28, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

May/June 2017 scienceish events in Canada (mostly in Vancouver)

I have five* events for this posting

(1) Science and You (Montréal)

The latest iteration of the Science and You conference took place May 4 – 6, 2017 at McGill University (Montréal, Québec). That's the sad news; the good news is that they have recorded and released the sessions onto YouTube. (This is the first time the conference has been held outside of Europe; in fact, it's usually held in France.) Here's why you might be interested (from the 2017 conference page),

The animator of the conference will be Véronique Morin:

Véronique Morin is a science journalist and communicator, the first president of the World Federation of Science Journalists (WFSJ), and a judge for science communication awards. She has worked for a science program on Quebec's public TV network, for CBC/Radio-Canada and TVOntario, and as a freelancer contributes to, among others, The Canadian Medical Journal, University Affairs magazine, and NewsDeeply, while pursuing documentary projects.

Let’s talk about S …

Holding the attention of an audience full of teenagers may seem impossible… particularly on topics that might be seen as boring, like science! Yet, it's essential to demystify science in order to make it accessible, even appealing, in the eyes of future citizens.
How can we encourage young adults to ask themselves questions about the surrounding world, nature and science? How can we get them to discover science with and without digital tools?

Find out tips and tricks used by our speakers Kristin Alford and Amanda Tyndall.

Kristin Alford
Dr Kristin Alford is a futurist and the inaugural Director of MOD., a futuristic museum of discovery at the University of South Australia. Her mind is presently occupied by the future of work and provoking young adults to ask questions about the role of science at the intersection of art and innovation.

Internet Website

Amanda Tyndall
Over 20 years of  science communication experience with organisations such as Café Scientifique, The Royal Institution of Great Britain (and Australia’s Science Exchange), the Science Museum in London and now with the Edinburgh International Science Festival. Particularly interested in engaging new audiences through linkages with the arts and digital/creative industries.

Internet Website

A troll in the room

Increasingly used by politicians, social media can reach thousands of people in a few seconds. Relayed endlessly, a message seems truthful, but is it really? At a time of fake news and alternative facts, how can we, as communicators or journalists, take up the challenge of disinformation?
Discover the traps and tricks of disinformation in the age of digital technologies with our two fact-checking experts, Shawn Otto and Vanessa Schipani, who will offer concrete solutions to unravel the true from the false.

 

Shawn Otto
Shawn Otto was awarded the IEEE-USA (“I-Triple-E”) National Distinguished Public Service Award for his work elevating science in America’s national public dialogue. He is cofounder and producer of the US presidential science debates at ScienceDebate.org. He is also an award-winning screenwriter and novelist, best known for writing and co-producing the Academy Award-nominated movie House of Sand and Fog.

Vanessa Schipani
Vanessa is a science journalist at FactCheck.org, which monitors U.S. politicians’ claims for accuracy. Previously, she wrote for outlets in the U.S., Europe and Japan, covering topics from quantum mechanics to neuroscience. She has bachelor’s degrees in zoology and philosophy and a master’s in the history and philosophy of science.

At 20,000 clicks from the extreme

Sharing daily life from a space station, a ship or a submarine: examples of social media use in extreme conditions are multiplying and the public is asking for more. How can public tools be used to highlight practices and discoveries? How should a large organisation manage its use of social networks? What pitfalls should be avoided? What does this mean for citizens and researchers?
Find out with Philippe Archambault and Leslie Elliott, experts in extreme conditions.

Philippe Archambault

Professor Philippe Archambault is a marine ecologist at Laval University, the director of the Notre Golfe network and president of the 4th World Conference on Marine Biodiversity. His research on the influence of global changes on biodiversity and the functioning of ecosystems has led him to work in all four corners of our oceans, from the Arctic to the Antarctic, through Papua New Guinea and French Polynesia.

Website

Leslie Elliott

Leslie Elliott leads a team of communicators at Ocean Networks Canada in Victoria, British Columbia, home to Canada’s world-leading ocean observatories in the Pacific and Arctic Oceans. Audiences can join robots equipped with high definition cameras via #livedive to discover more about our ocean.

Website

Science is not a joke!

Science and humour are two disciplines that might seem incompatible … and yet, like the Ig Nobels, humour can prove to be an excellent way to communicate a scientific message. This, however, can be quite challenging, since one needs to employ the right tone and language to captivate the audience while simultaneously communicating complex topics.

Patrick Baud and Brian Malow, both well-renowned science communicators, will give you the tools you need to capture your audience and convey a proper scientific message. You will be surprised how, even in science, a good dose of humour can make you laugh and think.

Patrick Baud
Patrick Baud is a French author who was born on June 30, 1979, in Avignon. For many years he has been sharing his passion for tales of fantasy and for the marvels and curiosities of the world through different media: radio, web, novels, comic strips, conferences, and videos. His YouTube channel "Axolot", created in 2013, now has over 420,000 followers.

Internet Website
Youtube

Brian Malow
Brian Malow is Earth’s Premier Science Comedian (self-proclaimed).  Brian has made science videos for Time Magazine and contributed to Neil deGrasse Tyson’s radio show.  He worked in science communications at a museum, blogged for Scientific American, and trains scientists to be better communicators.

Internet Website
YouTube

I don’t think they’ve managed to get everything up on YouTube yet but the material I’ve found has been subtitled (into French or English, depending on which language the speaker used).

Here are the opening day's talks on YouTube with English subtitles or French subtitles when appropriate. You can also find some abstracts for the panel presentations here. I was particularly interested in this panel (S3 – The Importance of Reaching Out to Adults in Scientific Culture). Note: I have searched out the French language descriptions for those unavailable in English,

Organized by Coeur des sciences, Université du Québec à Montréal (UQAM)
Animator: Valérie Borde, Freelance Science Journalist

Anouk Gingras, Musée de la civilisation, Québec
Text not available in English; translated from the French,

[Science at the Musée de la civilisation means:
• Some fifty exhibitions and discovery spaces
• Topical themes, tied to social issues, for exhibitions often aimed at adults
• A potential for new audiences through links with the Museum's other (often non-scientific) themes
The exhibition Nanotechnologies : l'invisible révolution (the invisible revolution):
• A topical theme prompting reflection
• A sensitive subject leading to a polarized exhibition path: a choice between "yes" or "no" to the future development of nanotechnologies
• The use of various elements to bring the subject closer to the visitor

  • Nanotechnologies in science fiction
  • Everyday objects containing nanoparticles
  • Historical objects that make use of nanotechnologies
  • Various microscopes retracing the history of nanotechnologies

• A form of interaction prompting visitor reflection via a friendly object: a yellow plastic duck fitted with an RFID chip

  • Seven consultation stations that invite visitors to take a position and reflect on ethical questions tied to the development of nanotechnologies
  • Real-time compilation of the data
  • Personalized delivery of the results
  • A measure of how many visitors changed their opinion after visiting the exhibition

Attendance results:
• A young adult audience was reached (51%)
• More men than women visited the exhibition
• The duck-based path prompts reflection and increases attention
• 3 out of 4 visitors take the duck; 92% complete the activity in full]

Marie Lambert-Chan, Québec Science
Capturing the attention of an adult readership: challenging mission, possible mission
Since 1962, Québec Science Magazine has been the only science magazine aimed at an adult readership in Québec. Our mission: covering topical subjects related to science and technology, as well as social issues, from a scientific point of view. Each year, we print eight issues, with a circulation of 22,000 copies. Furthermore, the magazine has received several awards and accolades. In 2017, Québec Science Magazine was honoured by the Canadian Magazine Awards/Grands Prix du Magazine and was named Best Magazine in the Science, Business and Politics category.
Although we have maintained a solid reputation among scientists and the media industry, our magazine is still relatively unknown to the general public. Why is that? How is it that, through all those years, we haven't found the right angle to engage a broader readership?
We are still searching for definitive answers, but here are our observations:
Speaking science to adults is much more challenging than it is with children, who can marvel endlessly at the smallest things. Unfortunately, adults lose this capacity to marvel and wonder for various reasons: they have specific interests, they failed high-school science, they don't feel competent enough to understand scientific phenomena. How do we bring the wonder back? This is our mission. Not impossible, and hopefully soon to be accomplished. One noticeable example is the number of renowned scientists interviewed during the popular talk show Tout le monde en parle, leading us to believe the general public may have an interest in science.
However, to accomplish our mission, we have to recount science. According to the Bulgarian writer and blogger Maria Popova, great science writing should explain, elucidate and enchant. To explain: to make the information clear and comprehensible. To elucidate: to reveal all the interconnections between the pieces of information. To enchant: to go beyond the scientific terms and information and tell a story, thus giving a kaleidoscopic vision of the subject. This is how we intend to capture our readership's attention.
Our team aims to accomplish this challenge. Although, to be perfectly honest, it would be much easier if we had more resources, financial or human. However, we don't lack ideas. We dream of major scientific investigations, conferences organized around themes from the magazine's issues, web documentaries, podcasts… Such initiatives would give us the visibility we desperately crave.
That said, even in the best conditions, would we have more subscribers? Perhaps. But it isn't assured. Even if our magazine is aimed at an adult readership, we are convinced that childhood and science go hand in hand, and that early exposure is even decisive for children's futures. At the moment, school programs are not in place for continuous scientific development. It is possible to develop an interest in scientific culture as an adult, but it is much easier to achieve this level of curiosity if it was previously fostered.

Robert Lamontagne, Université de Montréal
Since the beginning of my career as an astrophysicist, I have been interested in communicating science to non-specialist audiences. I have presented hundreds of lectures describing the phenomena of the cosmos. Initially, these were mainly offered in amateur astronomers' clubs or in high schools and Cégeps. Over the last few years, I have migrated to more general adult audiences in the context of cultural activities such as the “Festival des Laurentides”, the Arts, Culture and Society activities in Repentigny, and the Université du troisième âge (UTA), or Seniors' University.
The Quebec branch of the UTA, sponsored by the Université de Sherbrooke (UdeS), has existed since 1976. Seniors' universities, first created in Toulouse, France, are part of a worldwide movement. The UdeS and its seniors' university antennas are members of the International Association of Universities of the Third Age (AIUTA). The UTA comprises 28 antennas located in 10 regions and reaches more than 10,000 people per year. Antenna volunteers prepare educational programming by drawing on a catalog of courses, seminars and lectures covering subjects ranging from history and politics to health, science and the environment.
The UTA is aimed at people aged 50 and over who wish to continue their training and learn throughout their lives. It is an attentive, inquisitive, educated public and, given Canada's demographics, its numbers are growing rapidly. This segment of the population is often well off and very involved in society.
I usually use a two-pronged approach:
• Content that, while remaining rigorous, is articulated around a few key ideas, avoiding analytical expressions in favor of qualitative description.
• A narrative framework, the story, which contextualizes the scientific content and forges links with the audience.

Sophie Malavoy, Coeur des sciences – UQAM

Many obstacles need to be overcome in order to reach adults, especially those who aren't, in principle, interested in science:
• Competition from cultural activities such as theater, movies, etc.
• The idea that science is complex and dull
• A feeling of incompetence: “I've always been bad at math and physics”
• A funding shortfall for activities that target adults
How to reach out to those adults?
• To put science into perspective. To bring out its relevance by making links with current events and big issues (economic, health, environmental, political). To promote a transdisciplinary approach that includes the humanities and social sciences.
• To bank on originality by offering uncommon and playful experiences (scientific walks in the city, street performances, etc.)
• To build bridges between science and activities popular with the public (science/music; science/dance; science/theater; science/sports; science/gastronomy; science/literature)
• To reach people through emotion, without sensationalism. To boost their curiosity and capacity for wonder.
• To put a human face on science by insisting not only on the results of research but on its process. To share the adventure lived by researchers.
• To strengthen people's sense of competence. To emphasize the scientific method.
• To invite non-scientists (citizen groups, communities, consumers, etc.) into reflections on science issues (debates, etc.). To move from the dissemination of science to dialogue.

Didier Pourquery, The Conversation France
Text not available in English

[Since its launch in September 2015, The Conversation France platform (2 million page views per month) has steadily grown its audience. According to a study carried out one year after the launch, the readership structure was as follows
To hook adults and seniors, two angles are of interest; we use them on our site as well as in our daily newsletter (26,000 subscribers) and our Facebook page (11,500 followers):
1/ Explain the news: give readers the keys to understanding the scientific debates running through society; put science into the discussion (the site's mission is to “feed the citizen debate with university expertise and research”). The idea is to pose simple questions of comprehension at the moment they surface in public debate (during an election period, for example: what is populism? Explained by unimpeachable researchers from Sciences Po.)
Examples: understanding the climate conferences (COP21, COP22); understanding social debates (surrogacy); understanding the economy (universal basic income); understanding neurodegenerative diseases (Alzheimer's), etc.
2/ Pique curiosity: apply classic formulas (“did you know?”) to surprising subjects (for example, “What does a dog see when it watches TV?” drew 96,000 page views), then play with these articles on social networks. Ask simple, surprising questions. For example: do you look like your first name? That very serious academic article garnered 95,000 page views in French and 171,000 in English.
3/ Foster engagement: run simple, useful citizen science. For example, calling on our readers to track the tiger mosquito invasion across the country. That article had 112,000 page views and was widely republished on other sites. Another example: asking readers to photograph the bugs (punaises) in their surroundings.]

Here are my very brief and very rough translations. (1) Anouk Gingras is focused largely on a nanotechnology exhibit and whether or not visitors went through it and participated in various activities. She doesn't seem specifically focused on science communication for adults, but they are doing some very interesting and related work at Québec's Museum of Civilization. (2) Didier Pourquery is describing an online initiative known as ‘The Conversation France’ (strange—why not La conversation France?). Moving on, there's a website with a daily newsletter (blog?) and a Facebook page. They have two main projects: one is a discussion of current science issues in society, which is informed by experts but is not exclusive to them; the other is more curiosity-based science questions and discussion, such as ‘What does a dog see when it watches television?’

Serendipity! I hadn’t stumbled across this conference when I posted my May 12, 2017 piece on the ‘insanity’ of science outreach in Canada. It’s good to see I’m not the only one focused on science outreach for adults and that there is some action, although it seems to be a Québec-only effort.

(2) Ingenious—a book launch in Vancouver

The book will be launched on Thursday, June 1, 2017 at the Vancouver Public Library’s Central Branch (from the Ingenious: An Evening of Canadian Innovation event page),

Ingenious: An Evening of Canadian Innovation
Thursday, June 1, 2017 (6:30 pm – 8:00 pm)
Central Branch
Description

Gov. Gen. David Johnston and OpenText Corp. chair Tom Jenkins discuss Canadian innovation and their book Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier.

Books will be available for purchase and signing.

Doors open at 6 p.m.


Address:

350 West Georgia St.
Vancouver, BC V6B 6B1

Location Details:

Alice MacKay Room, Lower Level

I do have a few more details about the authors and their book. First, there’s this from the Ottawa Writer’s Festival March 28, 2017 event page,

To celebrate Canada’s 150th birthday, Governor General David Johnston and Tom Jenkins have crafted a richly illustrated volume of brilliant Canadian innovations whose widespread adoption has made the world a better place. From Bovril to BlackBerrys, lightbulbs to liquid helium, peanut butter to Pablum, this is a surprising and incredibly varied collection to make Canadians proud, and a testament to our unique entrepreneurial spirit.

Successful innovation is always inspired by at least one of three forces — insight, necessity, and simple luck. Ingenious moves through history to explore what circumstances, incidents, coincidences, and collaborations motivated each great Canadian idea, and what twist of fate then brought that idea into public acceptance. Above all, the book explores what goes on in the mind of an innovator, and maps the incredible spectrum of personalities that have struggled to improve the lot of their neighbours, their fellow citizens, and their species.

From the marvels of aboriginal invention such as the canoe, snowshoe, igloo, dogsled, lifejacket, and bunk bed to the latest pioneering advances in medicine, education, philanthropy, science, engineering, community development, business, the arts, and the media, Canadians have improvised and collaborated their way to international admiration. …

Then, there’s this April 5, 2017 item on Canadian Broadcasting Corporation’s (CBC) news online,

From peanut butter to the electric wheelchair, the stories behind numerous life-changing Canadian innovations are detailed in a new book.

Gov. Gen. David Johnston and Tom Jenkins, chair of the National Research Council and former CEO of OpenText, are the authors of Ingenious: How Canadian Innovators Made the World Smarter, Smaller, Kinder, Safer, Healthier, Wealthier and Happier. The authors hope their book reinforces and extends the culture of innovation in Canada.

“We started wanting to tell 50 stories of Canadian innovators, and what has amazed Tom and myself is how many there are,” Johnston told The Homestretch on Wednesday. The duo ultimately chronicled 297 innovations in the book, including the pacemaker, life jacket and chocolate bars.

“Innovations are not just technological, not just business, but they’re social innovations as well,” Johnston said.

Many of those innovations, and the stories behind them, are not well known.

“We’re sort of a humble people,” Jenkins said. “We’re pretty quiet. We don’t brag, we don’t talk about ourselves very much, and so we then lead ourselves to believe as a culture that we’re not really good inventors, the Americans are. And yet we knew that Canadians were actually great inventors and innovators.”

‘Opportunities and challenges’

For Johnston, his favourite story in the book is on the light bulb.

“It’s such a symbol of both our opportunities and challenges,” he said. “The light bulb was invented in Canada, not the United States. It was two inventors back in the 1870s that realized that if you passed an electric current through a resistant metal it would glow, and they patented that, but then they didn’t have the money to commercialize it.”

American inventor Thomas Edison went on to purchase that patent and made changes to the original design.

Johnston and Jenkins are also inviting readers to share their own innovation stories, on the book’s website.

I’m looking forward to the talk and wondering if they’ve included the botox and cellulose nanocrystal (CNC) stories in the book. BTW, Tom Jenkins was the chair of a panel examining Canadian research and development and lead author of the panel’s report (Innovation Canada: A Call to Action) for the then Conservative government (it’s also known as the Jenkins report). You can find out more about it in my Oct. 21, 2011 posting.

(3) Made in Canada (Vancouver)

This is either fortuitous or there’s some very high-level planning involved in the ‘Made in Canada: Inspiring Creativity and Innovation’ show, which runs from April 21 – Sept. 4, 2017 at Vancouver’s Science World (also known as the Telus World of Science). From the Made in Canada: Inspiring Creativity and Innovation exhibition page,

Celebrate Canadian creativity and innovation, with Science World’s original exhibition, Made in Canada, presented by YVR [Vancouver International Airport] — where you drive the creative process! Get hands-on and build the fastest bobsled, construct a stunning piece of Vancouver architecture and create your own Canadian sound mashup, to share with friends.

Vote for your favourite Canadian inventions and test fly a plane of your design. Discover famous (and not-so-famous, but super neat) Canadian inventions. Learn about amazing, local innovations like robots that teach themselves, one-person electric cars and a computer that uses parallel universes.

Imagine what you can create here, eh!!

You can find more information here.

One quick question, why would Vancouver International Airport be presenting this show? I asked that question of Science World’s Communications Coordinator, Jason Bosher, and received this response,

YVR is the presenting sponsor. They donated money to the exhibition and they also contributed an exhibit for the “We Move” themed zone in the Made in Canada exhibition. The YVR exhibit details the history of the YVR airport, its geographic advantage and some of the planes they have seen there.

I also asked if there was any connection between this show and the ‘Ingenious’ book launch,

Some folks here are aware of the book launch. It has to do with the Canada 150 initiative and nothing to do with the Made in Canada exhibition, which was developed here at Science World. It is our own original exhibition.

So there you have it.

(4) Robotics, AI, and the future of work (Ottawa)

I’m glad to finally stumble across a Canadian event focusing on the topic of artificial intelligence (AI), robotics and the future of work. Sadly (for me), this is taking place in Ottawa. Here are more details from the May 25, 2017 notice (received via email) from the Canadian Science Policy Centre (CSPC),

CSPC is Partnering with CIFAR [Canadian Institute for Advanced Research]
The Second Annual David Dodge Lecture

Join CIFAR and Senior Fellow Daron Acemoglu for
the Second Annual David Dodge CIFAR Lecture in Ottawa on June 13.
June 13, 2017 | 12 – 2 PM [emphasis mine]
Fairmont Château Laurier, Drawing Room | 1 Rideau St, Ottawa, ON
Along with the backlash against globalization and the outsourcing of jobs, concern is also growing about the effect that robotics and artificial intelligence will have on the labour force in advanced industrial nations. World-renowned economist Acemoglu, author of the best-selling book Why Nations Fail, will discuss how technology is changing the face of work and the composition of labour markets. Drawing on decades of data, Acemoglu explores the effects of widespread automation on manufacturing jobs, the changes we can expect from artificial intelligence technologies, and what responses to these changes might look like. This timely discussion will provide valuable insights for current and future leaders across government, civil society, and the private sector.

Daron Acemoglu is a Senior Fellow in CIFAR’s Institutions, Organizations & Growth program, and the Elizabeth and James Killian Professor of Economics at the Massachusetts Institute of Technology.

Tickets: $15 (A light lunch will be served.)

You can find a registration link here. Also, if you’re interested in the Canadian efforts in the field of artificial intelligence you can find more in my March 24, 2017 posting (scroll down about 25% of the way and then about 40% of the way) on the 2017 Canadian federal budget and science where I first noted the $93.7M allocated to CIFAR for launching a Pan-Canadian Artificial Intelligence Strategy.

(5) June 2017 edition of the Curiosity Collider Café (Vancouver)

This is an art/science event series (also known as art/sci and SciArt) that has taken place in Vancouver every few months since April 2015. Here’s more about the June 2017 edition (from the Curiosity Collider events page),

Collider Cafe

When
8:00pm on Wednesday, June 21st, 2017. Door opens at 7:30pm.

Where
Café Deux Soleils. 2096 Commercial Drive, Vancouver, BC (Google Map).

Cost
$5.00-10.00 cover at the door (sliding scale). Proceeds will be used to cover the cost of running this event, and to fund future Curiosity Collider events. Curiosity Collider is a registered BC non-profit organization.

***

#ColliderCafe is a space for artists, scientists, makers, and anyone interested in art+science. Meet, discover, connect, create. How do you explore curiosity in your life? Join us and discover how our speakers explore their own curiosity at the intersection of art & science.

The event will start promptly at 8pm (doors open at 7:30pm).

Enjoy!

*I changed ‘three’ events to ‘five’ events and added a number to each event for greater reading ease on May 31, 2017.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning, where an artificial intelligence system trains itself on ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes. A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”). Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.
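
The methodological point in the release is that, instead of timing human reactions, the researchers measured statistical associations between words in an embedding space. As a rough, hypothetical illustration (not the authors' actual code), the cosine similarity between two word vectors can serve as such an association measure; the tiny hand-made vectors below are stand-ins for real GloVe embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: higher values mean the two words are more closely associated."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 4-dimensional "embeddings" (real GloVe vectors have 100-300 dimensions);
# the numbers are invented purely for illustration.
embeddings = {
    "flower":     np.array([0.9, 0.1, 0.3, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.2, 0.1]),
    "pleasant":   np.array([0.8, 0.2, 0.4, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1, 0.2]),
}

print(cosine(embeddings["flower"], embeddings["pleasant"]))   # relatively high
print(cosine(embeddings["insect"], embeddings["pleasant"]))   # relatively low
```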

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at the University of Bath and a CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.
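
For readers who want a concrete sense of what “co-occurrence statistics in a 10-word window” means, here is a minimal, hypothetical sketch of the counting step only. GloVe itself additionally weights pairs by distance and then learns word vectors from the resulting matrix, which is not shown here.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often each ordered pair of words appears within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        # Look ahead up to `window` tokens; record each pair in both directions.
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[(word, tokens[j])] += 1
            counts[(tokens[j], word)] += 1
    return counts

# Tiny invented sentence, just to show the mechanics.
text = "the nurse said she would call the doctor and he would answer".split()
counts = cooccurrence_counts(text, window=10)
print(counts[("nurse", "she")], counts[("doctor", "he")])
```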

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
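
The comparison of target words against attribute words is the core of the paper's embedding-based association test. The following is a hedged sketch of that idea under stated assumptions: the word vectors would come from pretrained GloVe embeddings, and the `load_vectors` helper and file name in the comments are placeholders rather than any particular library's API.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    """How much more word w resembles attribute set A than attribute set B."""
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def effect_size(X, Y, A, B, vec):
    """Positive values mean target set X leans toward attributes A more than target set Y does."""
    x_assoc = [association(x, A, B, vec) for x in X]
    y_assoc = [association(y, A, B, vec) for y in Y]
    return (np.mean(x_assoc) - np.mean(y_assoc)) / np.std(x_assoc + y_assoc, ddof=1)

# Usage would look roughly like this, given a dict mapping words to pretrained vectors:
# vec = load_vectors("glove.840B.300d.txt")   # hypothetical loader, not a real API
# print(effect_size(["programmer", "engineer", "scientist"],
#                   ["nurse", "teacher", "librarian"],
#                   ["man", "male"], ["woman", "female"], vec))
```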

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender–like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016 Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current uses of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.