Tag Archives: artificial intelligence

Is China the world leader in nanotechnology and other fields too?

State of Chinese nanoscience/nanotechnology

China claims to be the world leader in the field in a white paper announced in an August 29, 2017 Springer Nature press release,

Springer Nature, the National Center for Nanoscience and Technology, China and the National Science Library of the Chinese Academy of Sciences (CAS) released in both Chinese and English a white paper entitled “Small Science in Big China: An overview of the state of Chinese nanoscience and technology” at NanoChina 2017, an international conference on nanoscience and technology held August 28 and 29 in Beijing. The white paper looks at the rapid growth of China’s nanoscience research into its current role as the world’s leader [emphasis mine], examines China’s strengths and challenges, and makes some suggestions for how its contribution to the field can continue to thrive.

The white paper points out that China has become a strong contributor to nanoscience research in the world, and is a powerhouse of nanotechnology R&D. Some of China’s basic research is leading the world. China’s applied nanoscience research and the industrialization of nanotechnologies have also begun to take shape. These achievements are largely due to China’s strong investment in nanoscience and technology. China’s nanoscience research is also moving from quantitative increase to quality improvement and innovation, with greater emphasis on the applications of nanotechnologies.

“China took an initial step into nanoscience research some twenty years ago, and has since grown its commitment at an unprecedented rate, as it has for scientific research as a whole. Such a growth is reflected both in research quantity and, importantly, in quality. Therefore, I regard nanoscience as a window through which to observe the development of Chinese science, and through which we could analyze how that rapid growth has happened. Further, the experience China has gained in developing nanoscience and related technologies is a valuable resource for the other countries and other fields of research to dig deep into and draw on,” said Arnout Jacobs, President, Greater China, Springer Nature.

The white paper explores China’s research output relative to the rest of the world in terms of research paper output, research contribution contained in the Nano database, and finally patents, providing insight into China’s strengths and expertise in nano research. The white paper also presents the results of a survey of experts from the community discussing the outlook for and challenges to the future of China’s nanoscience research.

China nano research output: strong rise in quantity and quality

In 1997, around 13,000 nanoscience-related papers were published globally. By 2016, this number had risen to more than 154,000 nano-related research papers. This corresponds to a compound annual growth rate of 14%, almost four times the 3.7% growth in publications across all areas of research. Over the same period, the nano-related output from China grew from 820 papers in 1997 to over 52,000 papers in 2016, a compound annual growth rate of 24%.

China’s contribution to the global total has been growing steadily. In 1997, Chinese researchers co-authored just 6% of the nano-related papers contained in the Science Citation Index (SCI). By 2010, this grew to match the output of the United States. They now contribute over a third of the world’s total nanoscience output — almost twice that of the United States.

Additionally, China’s share of the most cited nanoscience papers has kept increasing year on year, with a compound annual growth rate of 22% — more than three times the global rate. It overtook the United States in 2014 and its contribution is now many times greater than that of any other country in the world, manifesting an impressive progression in both quantity and quality.

The rapid growth of nanoscience in China has been enabled by consistent and strong financial support from the Chinese government. As early as 1990, the State Science and Technology Committee, the predecessor of the Ministry of Science and Technology (MOST), launched the Climbing Up project on nanomaterial science. During the 1990s, the National Natural Science Foundation of China (NSFC) also funded nearly 1,000 small-scale projects in nanoscience. In the National Guideline on Medium- and Long-Term Program for Science and Technology Development (for 2006−2020) issued in early 2006 by the Chinese central government, nanoscience was identified as one of four areas of basic research and received the largest proportion of research budget out of the four areas. The brain boomerang, with more and more foreign-trained Chinese researchers returning from overseas, is another contributor to China’s rapid rise in nanoscience.

The white paper clarifies the role of Chinese institutions, including CAS, in driving China’s rise to become the world’s leader in nanoscience. Currently, CAS is the world’s largest producer of high impact nano research, contributing more than twice as many papers to the 1% most-cited nanoscience literature as its closest competitor. In addition to CAS, five other Chinese institutions are ranked among the global top 20 in terms of output of the top-cited 1% of nanoscience papers — Tsinghua University, Fudan University, Zhejiang University, University of Science and Technology of China and Peking University.

Nano database reveals advantages and focus of China’s nano research

The Nano database (http://nano.nature.com) is a comprehensive platform recently developed by Nature Research – part of Springer Nature – which contains nanoscience-related papers published in 167 peer-reviewed journals including Advanced Materials, Nano Letters, Nature, Science and more. Analysis of the nanomaterial-containing articles published in the top 30 journals during 2014–2016 shows that Chinese scientists explore a wide range of nanomaterials, the five most common of which are nanostructured materials, nanoparticles, nanosheets, nanodevices and nanoporous materials.

In terms of applications research, China has a clear leading edge in catalysis, the most popular area of the country’s quality nanoscience papers. Chinese nano researchers have also contributed significantly to nanomedicine and energy-related applications. China is relatively weaker in nanomaterials for electronics applications compared to other research powerhouses, but robotics and lasers are emerging application areas of nanoscience in China, and nanoscience papers addressing photonics and data storage applications are also seeing strong growth. Over 80% of research from China listed in the database explicitly mentions applications of the nanostructures and nanomaterials described, notably higher than for most other leading nations such as the United States, Germany, the UK, Japan and France.

Nano also reveals the extent of China’s international collaborations in nano research. The percentage of China’s internationally collaborative papers increased from 36% in 2014 to 44% in 2016. This level of international collaboration, similar to that of South Korea, is still much lower than that of Western countries, and the rate of growth is also not as fast as that of the United States, France and Germany.

The United States is China’s biggest international collaborator, contributing to 55% of China’s internationally collaborative papers on nanoscience that are included in the top 30 journals in the Nano database. Germany, Australia and Japan follow in descending order as China’s collaborators on nano-related quality papers.

China’s patent output: topping the world, mostly filed domestically

Analysis of the Derwent Innovation Index (DII) database of Clarivate Analytics shows that China’s cumulative total of nano-related patent applications over the past 20 years amounts to 209,344, or 45% of the global total, more than twice that of the United States, the second largest contributor to nano-related patents. China surpassed the United States in 2008 and has ranked first in the world since.

Five Chinese institutions, including the CAS, Zhejiang University, Tsinghua University, Hon Hai Precision Industry Co., Ltd. and Tianjin University can be found among the global top 10 institutional contributors to nano-related patent applications. CAS has been at the top of the global rankings since 2008, with a total of 11,218 patent applications for the past 20 years. Interestingly, outside of China, most of the other big institutional contributors among the top 10 are commercial enterprises, while in China, research or academic institutions are leading in patent applications.

However, the number of nano-related patent applications China has filed overseas is still very low, accounting for only 2.61% of its cumulative total over the last 20 years, whereas the proportion for the United States is nearly 50%. In some European countries, including the UK and France, more than 70% of patent applications are filed overseas.

China has high numbers of patent applications in several popular technical areas for nanotechnology use, and is strongest in patents for polymer compositions and macromolecular compounds. In comparison, nano-related patent applications in the United States, South Korea and Japan are mainly for electronics or semiconductor devices, with the United States leading the world in the cumulative number of patents for semiconductor devices.

Outlook, opportunities and challenges

The white paper highlights that the rapid rise of China’s research output and patent applications has painted a rosy picture for the development of Chinese nanoscience, and in both the traditionally strong subjects and newly emerging areas, Chinese nanoscience shows great potential.

Several of the experts interviewed in the survey identify catalysis and catalytic nanomaterials as the most promising nanoscience area for China. The use of nanotechnology in the energy and medical sectors was also considered very promising.

Some of the interviewed experts commented that the industrial impact of China’s nanotechnology is limited and there is still a gap between nanoscience research and the industrialization of nanotechnologies. Therefore, they recommended that the government invest more in applied research to drive the translation of nanoscience research and find ways to encourage enterprises to invest more in R&D.

As more and more young scientists enter the field, the competition for research funding is becoming more intense. However, this increasing competition for funding did not concern most of the young scientists interviewed; rather, they emphasized that the soft environment is more important. They recommended establishing channels that allow the suggestions and creative ideas of the young to be heard. Some interviewed young researchers also commented that they felt the current evaluation system was geared towards past achievements or favoured overseas experience, and recommended the development of an improved talent selection mechanism to ensure the sustainable growth of China’s nanoscience.

I have taken a look at the white paper and found it to be well written. It also provides a brief but thorough history of nanotechnology/nanoscience, even adding a bit of historical information that was new to me. As for the rest of the white paper, it relies on bibliometrics (number of published papers and number of citations) and the number of patents filed to lay the groundwork for claiming Chinese leadership in nanotechnology. As I’ve stated many times before, these are problematic measures but, as far as I can determine, they are almost the only ones we have. Frankly, as a Canadian, it doesn’t much matter to me since Canada, no matter how you slice or dice it, is always in a lower tier relative to science leadership in major fields. It’s the Americans who might feel inclined to debate leadership with regard to nanotechnology and other major fields and I leave it to US commentators to take up the cudgels should they be inclined. The big bonuses here are the history, the glimpse into the Chinese perspective on the field of nanotechnology/nanoscience, and the analysis of weaknesses and strengths.
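One thing that is easy to check is the press release’s growth arithmetic. A compound annual growth rate is just (end/start)^(1/years) − 1, and the quoted figures (1997 to 2016 is 19 years) do work out; here’s a quick sanity check in Python:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate: (end/start)**(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Paper counts quoted in the Springer Nature press release (1997 vs. 2016)
print(f"Global nano papers: {cagr(13_000, 154_000, 19):.1%}")  # ~13.9%, the ~14% claimed
print(f"China nano papers:  {cagr(820, 52_000, 19):.1%}")      # ~24.4%, the ~24% claimed
```

The numbers check out, for whatever bibliometrics are worth.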

Coming up fast on Google and Amazon

A November 16, 2017 article by Christina Bonnington for Slate explores the possibility that a Chinese tech giant, Baidu, will provide Google and Amazon with serious competition in their quests to dominate world markets (Note: Links have been removed),

[Image caption: The company took a playful approach to the form—but it has functional reasons for the design, too. Credit: Baidu]

One of the most interesting companies in tech right now isn’t based in Palo Alto, or San Francisco, or Seattle. Baidu, a Chinese company with headquarters in Beijing, is taking on America’s biggest and most innovative tech titans—with style.

Baidu, a titan in its own right, leapt onto the scene as a competitor to Google in the search engine space. Since then, the company, largely underappreciated here in the U.S., has focused on beefing up its artificial intelligence efforts. Former AI chief Andrew Ng, upon leaving the company in March, credited Baidu’s CEO Robin Li with being one of the first technology leaders to fully appreciate the value of deep learning. Baidu now has a 1,300-person AI group, and that investment in AI has helped the company catch up to older, more established companies like Google and Amazon—both in emerging spaces, such as autonomous vehicles, and in consumer tech, as its latest announcement shows.

On Thursday [November 16, 2017], Baidu debuted its entrants to the popular virtual assistant space: a connected speaker and two robots. Baidu aims for the speaker to compete against options such as Amazon’s Echo line, Google Home, and Apple HomePod. Inside, the $256 device will utilize Baidu’s DuerOS conversational artificial intelligence platform, which is already used in more than 100 different smart home brands’ products. DuerOS will let you use your voice to do things like ask the speaker for information, play music, or hail a cab. Called the Raven H, the speaker includes high-end audio components from Tymphany and a unique design jointly created by acquired startup Raven Tech and Swedish consumer electronics company Teenage Engineering.

While the focus is on exciting new technology products from Baidu, the subtext, such as it is, suggests US companies had best keep an eye on their Chinese competitor(s).

Dutch/Chinese partnership to produce nanoparticles at the touch of a button

Now back to China and nanotechnology leadership and the production of nanoparticles. This announcement was made in a November 17, 2017 news item on Azonano,

Delft University of Technology [Netherlands] spin-off VSPARTICLE enters the booming Chinese market with a radical technology that allows researchers to produce nanoparticles at the push of a button. VSPARTICLE’s nanoparticle generator uses atoms, the world’s smallest building blocks, to provide a controllable source of nanoparticles. The start-up from Delft signed a distribution agreement with Bio-Sun to make their VSP-G1 nanoparticle generator available in China.

A November 16, 2017 VSPARTICLE press release, which originated the news item,

“We are honoured to cooperate with VSPARTICLE and bring the innovative VSP-G1 nanoparticle generator into the Chinese market. The VSP-G1 will create new possibilities for researchers in catalysis, aerosol, healthcare and electronics,” says Yinghui Cai, CEO of Bio-Sun.

With an exponential growth in nanoparticle research in the last decade, China is one of the leading countries in the field of nanotechnology and its applications. Vincent Laban, CFO of VSPARTICLE, explains: “Due to its immense investments in IOT, sensors, semiconductor technology, renewable energy and healthcare applications, China will eventually become one of our biggest markets. The collaboration with Bio-Sun offers a valuable opportunity to enter the Chinese market at exactly the right time.”

NANOPARTICLES ARE THE BUILDING BLOCKS OF THE FUTURE

Increasingly, scientists are focusing on nanoparticles as a key technology in enabling the transition to a sustainable future. Nanoparticles are used to make new types of sensors and smart electronics; provide new imaging and treatment possibilities in healthcare; and reduce harmful waste in chemical processes.

CURRENT RESEARCH TOOLKIT LACKS A FAST WAY FOR MAKING SPECIFIC BUILDING BLOCKS

With the latest tools in nanotechnology, researchers are exploring the possibilities of building novel materials. This is, however, a trial-and-error method. Getting the right nanoparticles often is a slow struggle, as most production methods take a substantial amount of effort and time to develop.

VSPARTICLE’S VSP-G1 NANOPARTICLE GENERATOR

With the VSP-G1 nanoparticle generator, VSPARTICLE makes the production of nanoparticles as easy as pushing a button. Easy and fast iterations enable researchers to fast-forward their research cycle and verify their hypotheses.

VSPARTICLE

Born out of the research labs of Delft University of Technology, with over 20 years of experience in the synthesis of aerosol, VSPARTICLE believes there is a whole new world of possibilities and materials at the nanoscale. The company was founded in 2014 and has an international sales network in Europe, Japan and China.

BIO-SUN

Bio-Sun was founded in Beijing in 2010 and is a leader in promoting nanotechnology and biotechnology instruments in China. It serves many renowned customers in life science, drug discovery and material science. Bio-Sun has four branch offices in Qingdao, Shanghai, Guangzhou and Wuhan City, and a nationwide sales network.

That’s all folks!

Memristors at Masdar

The Masdar Institute of Science and Technology (Abu Dhabi, United Arab Emirates; Masdar Institute Wikipedia entry) featured its work with memristors in an Oct. 1, 2017 Masdar Institute press release by Erica Solomon (for anyone who’s interested, I have a simple description of memristors and links to more posts about them after the press release),

Researchers Develop New Memristor Prototype Capable of Performing Complex Operations at High-Speed and Low Power, Could Lead to Advancements in Internet of Things, Portable Healthcare Sensing and other Embedded Technologies

Computer circuits in development at the Khalifa University of Science and Technology could make future computers much more compact, efficient and powerful thanks to advancements being made in memory technologies that combine processing and memory storage functions into one densely packed “memristor.”

Enabling faster, smaller and ultra-low-power computers with memristors could have a big impact on embedded technologies, which enable the Internet of Things (IoT), artificial intelligence, and portable healthcare sensing systems, says Dr. Baker Mohammad, Associate Professor of Electrical and Computer Engineering. Dr. Mohammad co-authored a book on memristor technologies with Class of 2017 PhD graduate Heba Abunahla, which has just been released by Springer, a leading global scientific publisher of books and journals. The book, titled Memristor Technology: Synthesis and Modeling for Sensing and Security Applications, provides readers with a single-source guide to fabricating, characterizing and modeling memristor devices for sensing applications.

The pair also contributed to a paper on memristor research that was published in IEEE Transactions on Circuits and Systems I: Regular Papers earlier this month with Class of 2017 MSc graduate Muath Abu Lebdeh and Dr. Mahmoud Al-Qutayri, Professor of Electrical and Computer Engineering. PhD student Yasmin Halawani is also an active member of Dr. Mohammad’s research team.

Conventional computers rely on energy and time-consuming processes to move information back and forth between the computer central processing unit (CPU) and the memory, which are separately located. A memristor, which is an electrical resistor that remembers how much current flows through it, can bridge the gap between computation and storage. Instead of fetching data from the memory and sending that data to the CPU where it is then processed, memristors have the potential to store and process data simultaneously.

“Memristors allow computers to perform many operations at the same time without having to move data around, thereby reducing latency, energy requirements, costs and chip size,” Dr. Mohammad explained. “We are focused on extending the logic gate design of the current memristor architecture with one that leads to even greater reduction of latency, energy dissipation and size.”

Logic gates perform logical operations on one or more binary inputs and typically produce a single binary output. That is why they are at the heart of what makes a computer work, allowing a CPU to carry out a given set of instructions, which are received as electrical signals, using one or a combination of the seven basic logical operations: AND, OR, NOT, XOR, XNOR, NAND and NOR.

The team’s latest work is aimed at advancing a memristor’s ability to perform a complex logic operation, known as the XNOR (Exclusive NOR) logic gate function, which is the most complex of the seven basic logic gate types.

Designing memristive logic gates is difficult, as they require that each electrical input and output be in the form of electrical resistance rather than electrical voltage.

“However, we were able to successfully design an XNOR logic gate prototype with a novel structure, by layering bipolar and unipolar memristor types in a novel heterogeneous structure, which led to a reduction in latency and energy consumption for a memristive XNOR logic gate circuit of 50% compared to the state-of-the-art stateful logic proposed by leading research institutes,” Dr. Mohammad revealed.

The team’s current work builds on five years of research in the field of memristors, which is expected to reach a market value of US$384 million by 2025, according to a recent report from Research and Markets. Up to now, the team has fabricated and characterized several memristor prototypes, assessing how different design structures influence efficiency and inform potential applications. Some innovative memristor technology applications the team discovered include machine vision, radiation sensing and diabetes detection. Two patents have already been issued by the US Patents and Trademark Office (USPTO) for novel memristor designs invented by the team, with two additional patents pending.

Their robust research efforts have also led to the publication of several papers on the technology in high impact journals, including The Journal of Physical Chemistry, Materials Chemistry and Physics, and IEEE TCAS. This strong technology base paved the way for undergraduate senior students Reem Aldahmani, Amani Alshkeili, and Reem Jassem Jaffar to build novel and efficient memristive sensing prototypes.

The memristor research is also set to get an additional boost thanks to the new University merger, which Dr. Mohammad believes could help expedite the team’s research and development efforts through convenient and continuous access to the wider range of specialized facilities and tools the new university has on offer.

The team’s prototype memristors are now in the laboratory prototype stage, and Dr. Mohammad plans to initiate discussions for internal partnership opportunities with the Khalifa University Robotics Institute, followed by external collaboration with leading semiconductor companies such as Abu Dhabi-owned GlobalFoundries, to accelerate the transfer of his team’s technology to the market.

With initial positive findings and the promise of further development through the University’s enhanced portfolio of research facilities, this project is a perfect demonstration of how the Khalifa University of Science and Technology is pushing the envelope of electronics and semiconductor technologies to help transform Abu Dhabi into a high-tech hub for research and entrepreneurship.

h/t Oct. 4, 2017 Nanowerk news item
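Before moving on to memristors themselves, a quick aside on the logic gates mentioned in the press release. An XNOR outputs 1 only when its two inputs match. Here’s a minimal truth-table sketch in Python of the seven basic operations (my own illustration; the team’s memristive version operates on electrical resistances rather than binary voltage levels):

```python
# The seven basic logic operations named in the press release, as
# truth-table functions on binary inputs a and b (0 or 1).
GATES = {
    "AND":  lambda a, b: a & b,
    "OR":   lambda a, b: a | b,
    "NOT":  lambda a, b: 1 - a,        # unary: b is ignored
    "XOR":  lambda a, b: a ^ b,
    "XNOR": lambda a, b: 1 - (a ^ b),  # 1 only when the inputs match
    "NAND": lambda a, b: 1 - (a & b),
    "NOR":  lambda a, b: 1 - (a | b),
}

for a in (0, 1):
    for b in (0, 1):
        results = "  ".join(f"{name}={fn(a, b)}" for name, fn in GATES.items())
        print(f"a={a} b={b}:  {results}")
```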

Slightly restating it from the press release, a memristor is a nanoscale electrical component which mimics neural plasticity. The word ‘memristor’ combines ‘memory’ and ‘resistor’.

For those who’d like a little more: an electrical circuit has traditionally been described with three passive components: capacitors, inductors, and resistors. The resistor is the circuit element which represents resistance to the flow of electric current. As for how this relates to the memristor (from the Memristor Wikipedia entry; Note: Links have been removed),

The memristor’s electrical resistance is not constant but depends on the history of current that had previously flowed through the device, i.e., its present resistance depends on how much electric charge has flowed in what direction through it in the past; the device remembers its history — the so-called non-volatility property.[2] When the electric power supply is turned off, the memristor remembers its most recent resistance until it is turned on again

The memristor could lead to more energy-saving devices but much of the current (pun noted) interest lies in its similarity to neural plasticity and its potential application in neuromorphic engineering (brainlike computing).
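For the technically inclined: that ‘depends on the history of current’ behaviour can be captured with the linear drift model HP Labs published in 2008. Here’s a minimal numerical sketch (the parameter values are illustrative assumptions, not taken from any particular device):

```python
import math

# Linear drift memristor model (Strukov et al., HP Labs, 2008).
# All parameter values below are illustrative assumptions.
R_ON, R_OFF = 100.0, 16_000.0  # ohms: resistance when fully doped / undoped
D = 10e-9                      # device thickness (metres)
MU_V = 1e-14                   # dopant mobility (m^2 V^-1 s^-1)
w = 0.5 * D                    # state variable: width of the doped region
dt = 1e-4                      # simulation time step (seconds)

for step in range(20_001):
    t = step * dt
    v = math.sin(2 * math.pi * t)              # 1 V amplitude, 1 Hz drive
    m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance
    i = v / m
    w += MU_V * (R_ON / D) * i * dt            # state drifts with the current
    w = min(max(w, 0.0), D)                    # keep within physical bounds
    if step % 5_000 == 0:
        print(f"t={t:4.1f}s  v={v:+.2f}V  M={m:8,.0f} ohms")
```

Run it and you can watch the resistance swing as charge flows through; turn the drive voltage off and w, and therefore the resistance, simply stays where it is, which is the non-volatility the Wikipedia entry describes.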

Here’s a sampling of some of the more recent memristor postings on this blog:

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field it helped to create was moving in a new direction. If memory serves, the company was trying to keep its technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society. As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research. That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo. Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility. A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5. They are also co-founders of BlackBerry (formerly Research In Motion Limited). Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million. Since that time Doug has donated a total of $30 million to Perimeter Institute. Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million. As suggested by its name, WIN is devoted to research in the area of nanotechnology. It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike, which was matched by both the Government of Canada and the province of Ontario, as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state-of-the-art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world. QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper. That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge. Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries. Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries. Local experimentalists are very much playing a leading role in this regard. It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing, based on so-called gate-model architecture, was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.
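Back to the technology for a moment: ‘quantum annealing’ machines like D-Wave’s are built to minimize energy functions over binary variables, typically expressed as a QUBO (quadratic unconstrained binary optimization) problem. A classical simulated-annealing toy (my own sketch, not D-Wave code) gives the flavour of the kind of problem involved:

```python
import math
import random

# Toy QUBO: minimize sum of Q[i,j] * x[i] * x[j] over binary x.
# Quantum annealers take problems in (an Ising equivalent of) this form.
Q = {(0, 0): -1, (1, 1): -1, (2, 2): -1,  # each bit 'wants' to be on...
     (0, 1): 2, (1, 2): 2}                # ...but neighbours are penalized

def energy(x):
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

random.seed(1)
x = [random.randint(0, 1) for _ in range(3)]
temperature = 2.0
while temperature > 0.01:
    candidate = x[:]
    candidate[random.randrange(3)] ^= 1  # flip one random bit
    delta = energy(candidate) - energy(x)
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate                    # downhill always, uphill sometimes
    temperature *= 0.995                 # cool slowly toward the ground state

print(x, energy(x))  # best state here is [1, 0, 1] with energy -2
```

A quantum annealer explores the same sort of energy landscape using quantum fluctuations rather than thermal ones.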

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15 million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.

FrogHeart’s good-bye to 2017 and hello to 2018

This is going to be relatively short and sweet(ish). Starting with the 2017 review:

Nano blogosphere and the Canadian blogosphere

From my perspective there’s been a change taking place in the nano blogosphere over the last few years. There are fewer blogs along with fewer postings from those who still blog. Interestingly, some blogs are becoming more generalized. Foresight Institute’s Nanodot blog (like FrogHeart) has expanded its range of topics to include artificial intelligence and other subjects. Andrew Maynard’s 2020 Science blog now exists in an archived form but, before its demise, it too had started to include other topics, notably risk in its many forms as opposed to risk and nanomaterials. Dexter Johnson’s blog, Nanoclast (on the IEEE [Institute of Electrical and Electronics Engineers] website), maintains its 3x weekly postings. Tim Harper, who often wrote about nanotechnology on his Cientifica blog, appears to have found a more freewheeling approach that is dominated by his Twitter feed, although he also seems (I can’t confirm that the latest posts were written in 2017) to blog here on timharper.net.

The Canadian science blogosphere seems to be getting quieter if Science Borealis (a blog aggregator) is any measure. My overall impression is that the bloggers have been a bit quieter this year, with fewer postings on the feed, though perhaps that’s due to some technical issues (sometimes FrogHeart posts do not get onto the feed). On the promising side, Science Borealis teamed with the Science Writers and Communicators of Canada Association to run a contest, “2017 People’s Choice Awards: Canada’s Favourite Science Online!” There were two categories (Favourite Science Blog and Favourite Science Site) and you can find a list of the finalists with links to the winners here.

Big congratulations to the winners: Body of Evidence won Canada’s Favourite Blog 2017 (Dec. 6, 2017 article by Alina Fisher for Science Borealis) and Let’s Talk Science won the Canada’s Favourite Science Online 2017 category, as per this announcement.

However, I can’t help wondering: where were ASAP Science, Acapella Science, Quirks & Quarks, IFLS (I f***ing love science), and others on the list of finalists? I would have thought any of these would have a lock on a position as a finalist. These are Canadian online science purveyors and they are hugely popular, which should mean they’d have no problem getting nominated and getting votes. I can’t find the criteria for nominations (or any hint there will be a 2018 contest) so I imagine their absence from the 2017 finalists list will remain a mystery to me.

Looking forward to 2018, I think that the nano blogosphere will continue with its transformation into a more general science/technology-oriented community. To some extent, I believe this reflects the fact that nanotechnology is being absorbed into the larger science/technology effort as foundational (something wiser folks than me predicted some years ago).

As for Science Borealis and the Canadian science online effort, I’m going to interpret the quieter feeds as a sign of a maturing community. After all, there are always ups and downs in terms of enthusiasm and participation and as I noted earlier the launch of an online contest is promising as is the collaboration with Science Writers and Communicators of Canada.

Canadian science policy

It was a big year.

Canada’s Chief Science Advisor

Canada’s first chief science advisor in many years was announced on Sept. 26, 2017: Dr. Mona Nemer, who stepped into her position in Fall 2017. I covered the event in my Sept. 26, 2017 posting, which includes a few more details than are found in the official announcement.

You’ll also find in that Sept. 26, 2017 posting a brief discourse on the Naylor report (also known as the Review of Fundamental Science) and some speculation on why, to my knowledge, there has been no action taken as a consequence.  The Naylor report was released April 10, 2017 and was covered here in a three-part review, published on June 8, 2017,

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 1 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 2 of 3

INVESTING IN CANADA’S FUTURE; Strengthening the Foundations of Canadian Research (Review of fundamental research final report): 3 of 3

I have found another, much briefer, commentary by Paul Dufour on the Canadian Science Policy Centre website (November 9, 2017).

Subnational and regional science funding

This began in 2016 with a workshop mentioned in my November 10, 2016 posting, “Council of Canadian Academies and science policy for Alberta.” By the time the report was published, the endeavour had been transformed into Science Policy: Considerations for Subnational Governments (report here and my June 22, 2017 commentary here).

I don’t know what will come of this but I imagine scientists will be supportive as it means more money, and they are always looking for more money. Still, the new government in British Columbia has only one ‘science entity’, the Premier’s Technology Council, and I’m not sure it’s still operational. To my knowledge, there is no ministry or other agency that is focused primarily or partially on science.

Meanwhile, a couple of representatives from the health sciences (neither of whom were involved in the production of the report) seem quite enthused about the prospects for provincial money in their October 27, 2017 opinion piece for the Canadian Science Policy Centre (Bev Holmes, Interim CEO, Michael Smith Foundation for Health Research, British Columbia, and Patrick Odnokon, CEO, Saskatchewan Health Research Foundation).

Artificial intelligence and Canadians

An event which I find more interesting with time was the announcement of the Pan-Canadian Artificial Intelligence Strategy in the 2017 Canadian federal budget. Since then there has been a veritable gold rush mentality with regard to artificial intelligence in Canada, with one announcement after the next about various corporations opening new offices in Toronto or Montréal.

What has really piqued my interest recently is a report being written for Canada’s Treasury Board by Michael Karlin (you can learn more from his Twitter feed, although you may need to scroll down past some of his more personal tweets; something about cassoulet in the Dec. 29, 2017 tweets). As for Karlin’s report, which is a work in progress, you can find out more about the report and Karlin in a December 12, 2017 article by Rob Hunt for the Algorithmic Media Observatory (sponsored by the Social Sciences and Humanities Research Council of Canada [SSHRC], the Centre for the Study of Democratic Citizenship, and the Fonds de recherche du Québec: Société et culture).

You can ring in 2018 by reading and making comments, which could influence the final version, on Karlin’s “Responsible Artificial Intelligence in the Government of Canada” part of the government’s Digital Disruption White Paper Series.

As for other 2018 news, the Council of Canadian Academies is expected to publish “The State of Science and Technology and Industrial Research and Development in Canada” at some point soon (we hope). This report follows and incorporates two previous ‘states’, The State of Science and Technology in Canada, 2012 (the first of these was a 2006 report) and the 2013 version of The State of Industrial R&D in Canada. There is already some preliminary data for this latest ‘state of’  (you can find a link and commentary in my December 15, 2016 posting).

FrogHeart then (2017) and soon (2018)

On looking back I see that the year started out at quite a clip as I was attempting to hit the 5000th blog posting mark, which I did on March 3, 2017. I have since cut back from the high of 3 postings/day to approximately 1 posting/day. It makes things more manageable, allowing me to focus on other matters.

By the way, you may note that the ‘Donate’ button has disappeared from my sidebar. I thank everyone who donated from the bottom of my heart. The money was more than currency; it also symbolized encouragement. On the sad side, I moved from one hosting service to a new one (Sibername) late in December 2016 and have been experiencing serious bandwidth issues which result in FrogHeart’s disappearance from the web for days at a time. I am trying to resolve the issues and hope that such actions as removing the ‘Donate’ button will help.

I wish my readers all the best for 2018 as we explore nanotechnology and other emerging technologies!

(I apologize for any and all errors. I usually take a little more time to write this end-of-year and coming-year piece but due to bandwidth issues I was unable to access my draft and give it at least one review. At this point, I’m too tired to try spotting errors. If you see any, please do let me know.)

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that will register your every preference and make life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
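
For readers who like to see the mechanics, recommendation systems of this general kind are often content-based: activities and passengers are described with the same set of preference tags, and activities are ranked by overlap. Here’s a minimal, hypothetical Python sketch of that idea; the tags, activities, and scoring are my own inventions for illustration and bear no relation to Carnival’s actual platform.

```python
# A toy content-based recommender: rank onboard activities by how many
# preference tags they share with a passenger's profile. All tags and
# activities are invented for illustration.

def recommend(passenger_tags, activities, top_n=3):
    """Return up to top_n activity names ordered by tag overlap."""
    scored = []
    for name, tags in activities.items():
        overlap = len(passenger_tags & tags)  # shared tags
        if overlap:
            scored.append((overlap, name))
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored[:top_n]]

passenger = {"classical music", "fine wine", "quiet"}
onboard = {
    "violin concerto": {"classical music", "quiet"},
    "limbo competition": {"party", "loud"},
    "wine tasting": {"fine wine", "quiet"},
}
print(recommend(passenger, onboard))  # concerto and tasting outrank limbo
```

A production system would weight tags, learn from behaviour over time, and fold in many more signals (location, time of day, past purchases), but the ranking step looks much the same.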

In Kuang’s Oct. 19, 2017 article he notes that the cruise line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a user’s newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search and, while I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g., Facebook bots) that have usurped my role as the arbiter of what interests me; in short, I resent my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, political operatives have married electoral data with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in that excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, means that another largely invisible layer of control has been added to your life. After all, the students in Elia Powers’s study didn’t realize their news feeds were being pre-curated.

Artificial intelligence and metaphors

This is a different approach to artificial intelligence. From a June 27, 2017 news item on ScienceDaily,

Ask Siri to find a math tutor to help you “grasp” calculus and she’s likely to respond that your request is beyond her abilities. That’s because metaphors like “grasp” are difficult for Apple’s voice-controlled personal assistant to, well, grasp.

But new UC Berkeley research suggests that Siri and other digital helpers could someday learn the algorithms that humans have used for centuries to create and understand metaphorical language.

Mapping 1,100 years of metaphoric English language, researchers at UC Berkeley and Lehigh University in Pennsylvania have detected patterns in how English speakers have added figurative word meanings to their vocabulary.

The results, published in the journal Cognitive Psychology, demonstrate how throughout history humans have used language that originally described palpable experiences such as “grasping an object” to describe more intangible concepts such as “grasping an idea.”

Unfortunately, this image is not the best quality,

Scientists have created historical maps showing the evolution of metaphoric language. (Image courtesy of Mahesh Srinivasan)

A June 27, 2017 University of California at Berkeley (or UC Berkeley) news release by Yasmin Anwar, which originated the news item, provides more detail,

“The use of concrete language to talk about abstract ideas may unlock mysteries about how we are able to communicate and conceptualize things we can never see or touch,” said study senior author Mahesh Srinivasan, an assistant professor of psychology at UC Berkeley. “Our results may also pave the way for future advances in artificial intelligence.”

The findings provide the first large-scale evidence that the creation of new metaphorical word meanings is systematic, researchers said. They can also inform efforts to design natural language processing systems like Siri to help them understand creativity in human language.

“Although such systems are capable of understanding many words, they are often tripped up by creative uses of words that go beyond their existing, pre-programmed vocabularies,” said study lead author Yang Xu, a postdoctoral researcher in linguistics and cognitive science at UC Berkeley.

“This work brings opportunities toward modeling metaphorical words at a broad scale, ultimately allowing the construction of artificial intelligence systems that are capable of creating and comprehending metaphorical language,” he added.

Srinivasan and Xu conducted the study with Lehigh University psychology professor Barbara Malt.

Using the Metaphor Map of English database, researchers examined more than 5,000 examples from the past millennium in which word meanings from one semantic domain, such as “water,” were extended to another semantic domain, such as “mind.”

Researchers called the original semantic domain the “source domain” and the domain that the metaphorical meaning was extended to, the “target domain.”

More than 1,400 online participants were recruited to rate semantic domains such as “water” or “mind” according to the degree to which they were related to the external world (light, plants), animate things (humans, animals), or intense emotions (excitement, fear).

These ratings were fed into computational models that the researchers had developed to predict which semantic domains had been the sources or targets of metaphorical extension.

In comparing their computational predictions against the actual historical record provided by the Metaphor Map of English, researchers found that their models correctly forecast about 75 percent of recorded metaphorical language mappings over the past millennium.
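
To make the approach concrete, here is a hedged Python sketch of the kind of prediction task described: a simple classifier trained on participant ratings (externality, animacy, emotional intensity) to predict whether a domain historically served as a metaphorical source. The ratings and labels below are invented placeholders, not the study’s data, and the researchers’ actual models were more sophisticated.

```python
# A toy version of the prediction task: from three participant ratings,
# classify a semantic domain as a likely *source* of metaphorical
# extension (1) or a likely *target* (0). All values are invented.
from sklearn.linear_model import LogisticRegression

# features per domain: [externality, animacy, emotional intensity], 0-1
X = [
    [0.9, 0.2, 0.1],  # "water"  -> historically a source domain
    [0.8, 0.3, 0.2],  # "plants" -> source
    [0.1, 0.4, 0.9],  # "fear"   -> target
    [0.2, 0.5, 0.8],  # "pride"  -> target
]
y = [1, 1, 0, 0]  # 1 = source, 0 = target

model = LogisticRegression().fit(X, y)
print(model.predict([[0.85, 0.25, 0.15]]))  # a concrete domain -> source
```

The study’s central finding maps onto this toy directly: the externality rating carries most of the predictive weight, which is why “grasping a rope” lends its verb to “grasping an idea” and not the other way around.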

Furthermore, they found that the degree to which a domain is tied to experience in the external world, such as “grasping a rope,” was the primary predictor of how a word would take on a new metaphorical meaning such as “grasping an idea.”

For example, time and again, researchers found that words associated with textiles, digestive organs, wetness, solidity and plants were more likely to provide sources for metaphorical extension, while mental and emotional states, such as excitement, pride and fear were more likely to be the targets of metaphorical extension.


Here’s a link to and a citation for the paper,

Evolution of word meanings through metaphorical mapping: Systematicity over the past millennium by Yang Xu, Barbara C. Malt, Mahesh Srinivasan. Cognitive Psychology Volume 96, August 2017, Pages 41–53 DOI: https://doi.org/10.1016/j.cogpsych.2017.05.005

The early web version of this paper is behind a paywall.

For anyone interested in the ‘Metaphor Map of English’ database mentioned in the news release, you can find it here on the University of Glasgow website. By the way, it also seems to be known as ‘Mapping Metaphor with the Historical Thesaurus’.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates on more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.
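
For the technically inclined, the “mathematically forgetting” step has a neat closed form in the restricted-Boltzmann-machine-style networks used in this line of work: summing over the hidden neurons turns the representation into a product of hyperbolic cosines. The Python sketch below shows that amplitude calculation with random placeholder weights; it is an illustration of the ansatz, not a trained physical state from the paper.

```python
# A minimal neural-network quantum state sketch: visible neurons are
# spins (+1/-1), hidden neurons are summed out analytically, leaving a
# compact unnormalized amplitude for each spin configuration.
# Weights here are random placeholders, not a physical state.
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 3
a = rng.normal(size=n_visible)              # visible biases
b = rng.normal(size=n_hidden)               # hidden biases
W = rng.normal(size=(n_hidden, n_visible))  # visible-hidden connections

def amplitude(sigma):
    """psi(sigma): tracing out hidden units gives a product of cosh terms."""
    sigma = np.asarray(sigma, dtype=float)
    return np.exp(a @ sigma) * np.prod(2 * np.cosh(b + W @ sigma))

print(amplitude([1, -1, 1, 1, -1, 1]))
```

The compactness comes from the connection count: with few hidden-to-visible links, a handful of numbers stands in for a state vector whose explicit list of probabilities would grow exponentially with the number of particles.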

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006, although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.
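
The counting itself is easy to picture: a clique of n all-to-all connected neurons is treated as a simplex of dimension n − 1. Here’s a small, hypothetical Python sketch that tallies simplices by dimension in a toy undirected graph; note that Blue Brain actually works with directed cliques in far larger reconstructed microcircuits, so this shows only the flavour of the computation.

```python
# Tally cliques (all-to-all connected groups of neurons) by dimension:
# a clique of n nodes is a simplex of dimension n - 1.
# The toy connectivity below is invented for illustration.
import networkx as nx
from collections import Counter

G = nx.Graph()
G.add_edges_from([
    (0, 1), (0, 2), (1, 2),          # a 3-clique -> one 2D simplex
    (2, 3), (3, 4), (2, 4), (3, 5),  # a second triangle plus an edge
])

dims = Counter(len(c) - 1 for c in nx.enumerate_all_cliques(G) if len(c) >= 2)
for dim, count in sorted(dims.items()):
    print(f"{count} simplices of dimension {dim}")
```

In this toy graph the tally is seven 1-dimensional simplices (edges) and two 2-dimensional ones (triangles); in the reconstructed brain tissue, the same counting turns up cliques corresponding to objects of seven or more dimensions.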

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take Matthew Braga’s word for the situation (his article is excerpted later in this post).

In any event, Toronto seemed to have a mild advantage over Montréal initially, with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy, and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and has many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives but at that point the article’s focus is exclusively Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “xxx is unlike any other business before it” and “it is the technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’, which, if you look at his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Meet Pepper, a robot for health care clinical settings

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org,

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

A June 22, 2017 McMaster University news release, which originated the news item, provides more detail,

With the help of Softbank’s humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University, and Hermenio Lima, a dermatologist and professor of medicine at McMaster’s Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients’ behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the [Canada] Science and Technology Museum in Ottawa.

“Pepper will help us highlight some very important aspects and motives of human behaviour and communication,” said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, ‘read’ emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: “We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients’ communications needs.”

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD’s Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

“This partnership is a testament to the collaborative nature of innovation,” said dean of FCAD, Charles Falzon. “I’m thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom.”

“This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care,” says McMaster’s Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper, offers a rich source of research potential for the projects at Ryerson and McMaster. This integration is also supported by IBM Canada and [Southern Ontario Smart Computing Innovation Platform] SOSCIP by providing the project access to high performance research computing resources and staff in Ontario.

“We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations,” said Harris Smith.

I just went to a presentation at the facility where my mother lives and it was all about delivering more individualized and better care for residents. Given that most seniors in British Columbia care facilities do not receive the number of service hours per resident recommended by the province due to funding issues, it seemed a well-meaning initiative offered in the face of daunting odds against success. Now with this news, I wonder what impact ‘Pepper’ might ultimately have on seniors and on the people who currently deliver service. Of course, this assumes that researchers will be able to tackle problems with understanding various accents and communication strategies, which are strongly influenced by culture and, over time, the aging process.

After writing that last paragraph I stumbled onto this June 27, 2017 Sage Publications press release on EurekAlert about a related matter,

Existing digital technologies must be exploited to enable a paradigm shift in current healthcare delivery which focuses on tests, treatments and targets rather than the therapeutic benefits of empathy. Writing in the Journal of the Royal Society of Medicine, Dr Jeremy Howick and Dr Sian Rees of the Oxford Empathy Programme, say a new paradigm of empathy-based medicine is needed to improve patient outcomes, reduce practitioner burnout and save money.

Empathy-based medicine, they write, re-establishes relationship as the heart of healthcare. “Time pressure, conflicting priorities and bureaucracy can make practitioners less likely to express empathy. By re-establishing the clinical encounter as the heart of healthcare, and exploiting available technologies, this can change”, said Dr Howick, a Senior Researcher in Oxford University’s Nuffield Department of Primary Care Health Sciences.

Technology is already available that could reduce the burden of practitioner paperwork by gathering basic information prior to consultation, for example via email or a mobile device in the waiting room.

During the consultation, the computer screen could be placed so that both patient and clinician can see it, a help to both if needed, for example, to show infographics on risks and treatment options to aid decision-making and the joint development of a treatment plan.

Dr Howick said: “The spread of alternatives to face-to-face consultations is still in its infancy, as is our understanding of when a machine will do and when a person-to-person relationship is needed.” However, he warned, technology can also get in the way. A computer screen can become a barrier to communication rather than an aid to decision-making. “Patients and carers need to be involved in determining the need for, and designing, new technologies”, he said.

I sincerely hope that the Canadian project has taken into account some of the issues described in the ’empathy’ press release and in the article, which can be found here,

Overthrowing barriers to empathy in healthcare: empathy in the age of the Internet by J Howick and S Rees. Journal of the Royal Society of Medicine. Article first published online: June 27, 2017 DOI: https://doi.org/10.1177/0141076817714443

This article is open access.

Hacking the human brain with a junction-based artificial synaptic device

Earlier today I published a piece featuring Dr. Wei Lu’s work on memristors and the movement to create an artificial brain (my June 28, 2017 posting: Dr. Wei Lu and bio-inspired ‘memristor’ chips). For this posting I’m featuring a non-memristor (if I’ve properly understood the technology) type of artificial synapse. From a June 28, 2017 news item on Nanowerk,

One of the greatest challenges facing artificial intelligence development is understanding the human brain and figuring out how to mimic it.

Now, one group reports in ACS Nano (“Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device”) that they have developed an artificial synapse capable of simulating a fundamental function of our nervous system — the release of inhibitory and stimulatory signals from the same “pre-synaptic” terminal.

Unfortunately, the American Chemical Society news release on EurekAlert, which originated the news item, doesn’t provide too much more detail,

The human nervous system is made up of over 100 trillion synapses, structures that allow neurons to pass electrical and chemical signals to one another. In mammals, these synapses can initiate and inhibit biological messages. Many synapses just relay one type of signal, whereas others can convey both types simultaneously or can switch between the two. To develop artificial intelligence systems that better mimic human learning, cognition and image recognition, researchers are imitating synapses in the lab with electronic components. Most current artificial synapses, however, are only capable of delivering one type of signal. So, Han Wang, Jing Guo and colleagues sought to create an artificial synapse that can reconfigurably send stimulatory and inhibitory signals.

The researchers developed a synaptic device that can reconfigure itself based on voltages applied at the input terminal of the device. A junction made of black phosphorus and tin selenide enables switching between the excitatory and inhibitory signals. This new device is flexible and versatile, which is highly desirable in artificial neural networks. In addition, the artificial synapses may simplify the design and functions of nervous system simulations.
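
Based on that description, one can sketch the behaviour (though emphatically not the black phosphorus/tin selenide device physics) as a synapse whose sign is set by a control voltage. The following toy Python model is my own illustration of reconfigurable excitatory/inhibitory signalling; all names and numbers are invented.

```python
# A toy behavioral model of a reconfigurable synapse: a control voltage
# at the input terminal selects excitatory or inhibitory mode, and
# incoming spikes then nudge a downstream response up or down.
# This illustrates the described behaviour only, not the device physics.

class ReconfigurableSynapse:
    def __init__(self, weight=0.5):
        self.weight = weight
        self.mode = "excitatory"

    def configure(self, control_voltage):
        """Positive control voltage -> excitatory; negative -> inhibitory."""
        self.mode = "excitatory" if control_voltage >= 0 else "inhibitory"

    def transmit(self, spike=1.0):
        """Return the signed post-synaptic contribution of one spike."""
        sign = 1.0 if self.mode == "excitatory" else -1.0
        return sign * self.weight * spike

syn = ReconfigurableSynapse()
syn.configure(+1.2)
print(syn.transmit())  # +0.5 : stimulatory response
syn.configure(-1.2)
print(syn.transmit())  # -0.5 : inhibitory response
```

The point of contrast with the memristor- and transistor-type synapses quoted below is that here a single junction flips between the two signal types without an extra modulating terminal.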

Here’s how I concluded that this is not a memristor-type device (from the paper [first paragraph, final sentence]; a link and citation will follow; Note: Links have been removed),

The conventional memristor-type [emphasis mine](14-20) and transistor-type(21-25) artificial synapses can realize synaptic functions in a single semiconductor device but lacks the ability [emphasis mine] to dynamically reconfigure between excitatory and inhibitory responses without the addition of a modulating terminal.

Here’s a link to and a citation for the paper,

Emulating Bilingual Synaptic Response Using a Junction-Based Artificial Synaptic Device by He Tian, Xi Cao, Yujun Xie, Xiaodong Yan, Andrew Kostelec, Don DiMarzio, Cheng Chang, Li-Dong Zhao, Wei Wu, Jesse Tice, Judy J. Cha, Jing Guo, and Han Wang. ACS Nano, Article ASAP DOI: 10.1021/acsnano.7b03033 Publication Date (Web): June 28, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.