Monthly Archives: March 2017

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article coming up shortly mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to scare people away from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s earlier posting (March 31, 2017): China, US, and the race for artificial intelligence research domination.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between the US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as the standard trajectory. Television is a classic example but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Post Graduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now dark factories, which can run without lights because there are no people in them, are starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
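To put those coefficients into perspective, here is a small, purely illustrative Python sketch (my own arithmetic, not taken from the study or the BuzzFeed report); the workforce size and robot count are made-up numbers chosen only to show how the reported effect sizes translate into jobs and wages for a hypothetical local labour market.

```python
# Illustrative arithmetic only: applies the effect sizes quoted above
# (Acemoglu & Restrepo, NBER Working Paper 23285) to a made-up labour market.

workers = 100_000        # hypothetical local workforce (assumption)
robots_added = 300       # hypothetical number of fully autonomous robots added (assumption)

robots_per_thousand = robots_added / (workers / 1_000)

# Quoted ranges: employment rate drops 0.18-0.34 percentage points and wages
# fall 0.25-0.5 percentage points per robot per thousand workers.
emp_low, emp_high = 0.18, 0.34
wage_low, wage_high = 0.25, 0.50

print(f"Robots per thousand workers: {robots_per_thousand:.1f}")
print(f"Employment rate change: -{robots_per_thousand * emp_low:.2f} to "
      f"-{robots_per_thousand * emp_high:.2f} percentage points")
print(f"Wage change: -{robots_per_thousand * wage_low:.2f} to "
      f"-{robots_per_thousand * wage_high:.2f} percentage points")

# Headline figure from the study: roughly 6.2 workers displaced per robot.
print(f"Approximate jobs displaced: {robots_added * 6.2:.0f}")
```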

I can’t help wondering, if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, whether that’s representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc. What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research which contradicts a top-tier Trump government official or makes one look foolish? Will it be suppressed?

You can find the report, “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper Series, Working Paper 23285, released March 2017), here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way) where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Would you like to invest in the Argonne National Laboratory’s reusable oil spill sponge?

A March 7, 2017 news item on phys.org describes some of the US Argonne National Laboratory’s research into oil spill cleanup technology,

When the Deepwater Horizon drilling pipe blew out seven years ago, beginning the worst oil spill [BP oil spill in the Gulf of Mexico] in U.S. history, those in charge of the recovery discovered a new wrinkle: the millions of gallons of oil bubbling from the sea floor weren’t all collecting on the surface where it could be skimmed or burned. Some of it was forming a plume and drifting through the ocean under the surface.

Now, scientists at the U.S. Department of Energy’s (DOE) Argonne National Laboratory have invented a new foam, called Oleo Sponge, that addresses this problem. The material not only easily adsorbs oil from water, but is also reusable and can pull dispersed oil from the entire water column—not just the surface.

A March 6, 2017 Argonne National Laboratory news release (also on EurekAlert) by Louise Lerner, which originated the news item, provides more information about the work,

“The Oleo Sponge offers a set of possibilities that, as far as we know, are unprecedented,” said co-inventor Seth Darling, a scientist with Argonne’s Center for Nanoscale Materials and a fellow of the University of Chicago’s Institute for Molecular Engineering.

“We already have a library of molecules that can grab oil, but the problem is how to get them into a useful structure and bind them there permanently.”

The scientists started out with common polyurethane foam, used in everything from furniture cushions to home insulation. This foam has lots of nooks and crannies, like an English muffin, which could provide ample surface area to grab oil; but they needed to give the foam a new surface chemistry in order to firmly attach the oil-loving molecules.

Previously, Darling and fellow Argonne chemist Jeff Elam had developed a technique called sequential infiltration synthesis, or SIS, which can be used to infuse hard metal oxide atoms within complicated nanostructures.

After some trial and error, they found a way to adapt the technique to grow an extremely thin layer of metal oxide “primer” near the foam’s interior surfaces. This serves as the perfect glue for attaching the oil-loving molecules, which are deposited in a second step; they hold onto the metal oxide layer with one end and reach out to grab oil molecules with the other.

The result is Oleo Sponge, a block of foam that easily adsorbs oil from the water. The material, which looks a bit like an outdoor seat cushion, can be wrung out to be reused—and the oil itself recovered.

Oleo Sponge

At tests at a giant seawater tank in New Jersey called Ohmsett, the National Oil Spill Response Research & Renewable Energy Test Facility, the Oleo Sponge successfully collected diesel and crude oil from both below and on the water surface.

“The material is extremely sturdy. We’ve run dozens to hundreds of tests, wringing it out each time, and we have yet to see it break down at all,” Darling said.

Oleo Sponge could potentially also be used routinely to clean harbors and ports, where diesel and oil tend to accumulate from ship traffic, said John Harvey, a business development executive with Argonne’s Technology Development and Commercialization division.

Elam, Darling and the rest of the team are continuing to develop the technology.

“The technique offers enormous flexibility, and can be adapted to other types of cleanup besides oil in seawater. You could attach a different molecule to grab any specific substance you need,” Elam said.

The team is actively looking to commercialize [emphasis mine] the material, Harvey said; those interested in licensing the technology or collaborating with the laboratory on further development may contact partners@anl.gov.

Here’s a link to and a citation for the paper,

Advanced oil sorbents using sequential infiltration synthesis by Edward Barry, Anil U. Mane, Joseph A. Libera, Jeffrey W. Elam, and Seth B. Darling. J. Mater. Chem. A, 2017, 5, 2929-2935 DOI: 10.1039/C6TA09014A First published online 11 Jan 2017

This paper is behind a paywall.

The two most recent posts here featuring oil spill technology are my Nov. 3, 2016 piece titled: Oil spill cleanup nanotechnology-enabled solution from A*STAR and my Sept. 15, 2016 piece titled: Canada’s Ingenuity Lab receives a $1.7M grant to develop oil recovery system for oil spills. I hope that one of these days someone manages to commercialize at least one of the new oil spill technologies. It seems that there hasn’t been much progress since the BP (Deepwater Horizon) oil spill. If someone has better information than I do about the current state of oil spill cleanup technologies, please do leave a comment.

Matrix of gelatin nanofibres for culturing large quantities of human stem cells

A Feb. 14, 2017 news item on ScienceDaily describes work that may have a big influence on stem cell production,

A new nanofiber-on-microfiber matrix could help produce more and better quality stem cells for disease treatment and regenerative therapies.

A matrix made of gelatin nanofibers on a synthetic polymer microfiber mesh may provide a better way to culture large quantities of healthy human stem cells.

Developed by a team of researchers led by Ken-ichiro Kamei of Kyoto University’s Institute for Integrated Cell-Material Sciences (iCeMS), the ‘fiber-on-fiber’ (FF) matrix improves on currently available stem cell culturing techniques.

A Feb. 14/15, 2017 Kyoto University press release (also on EurekAlert), which originated the news item, explains why scientists are trying to find a new way to culture stem cells,

Researchers have been developing 3D culturing systems to allow human pluripotent stem cells (hPSCs) to grow and interact with their surroundings in all three dimensions, as they would inside the human body, rather than in two dimensions, like they do in a petri dish.

Pluripotent stem cells have the ability to differentiate into any type of adult cell and have huge potential for tissue regeneration therapies, treating diseases, and for research purposes.

Most currently reported 3D culturing systems have limitations, and result in low quantities and quality of cultured cells.

Kamei and his colleagues fabricated gelatin nanofibers onto a microfiber sheet made of synthetic, biodegradable polyglycolic acid. Human embryonic stem cells were then seeded onto the matrix in a cell culture medium.

The FF matrix allowed easy exchange of growth factors and supplements from the culture medium to the cells. Also, the stem cells adhered well to the matrix, resulting in robust cell growth: after four days of culture, more than 95% of the cells grew and formed colonies.

The team also scaled up the process by designing a gas-permeable cell culture bag in which multiple cell-loaded, folded FF matrices were placed. The system was designed so that minimal changes were needed to the internal environment, reducing the amount of stress placed on the cells. This newly developed system yielded a larger number of cells compared to conventional 2D and 3D culture methods.

“Our method offers an efficient way to expand hPSCs of high quality within a shorter term,” write the researchers in their study published in the journal Biomaterials. Also, because the use of the FF matrix is not limited to a specific type of culture container, it allows for scaling up production without loss of cell functions. “Additionally, as nanofiber matrices are advantageous for culturing other adherent cells, including hPSC-derived differentiated cells, FF matrix might be applicable to the large-scale production of differentiated functional cells for various applications,” the researchers conclude.

Human stem cells that grew on the ‘fiber-on-fiber’ culturing system

Here’s a link to and a citation for the paper,

Nano-on-micro fibrous extracellular matrices for scalable expansion of human ES/iPS cells by Li Liu, Ken-ichiro Kamei, Momoko Yoshioka, Minako Nakajima, Junjun Li, Nanae Fujimoto, Shiho Terada, Yumie Tokunaga, Yoshie Koyama, Hideki Sato, Kouichi Hasegawa. Biomaterials Volume 124, April 2017, Pages 47–54  http://dx.doi.org/10.1016/j.biomaterials.2017.01.039

This paper is behind a paywall.

Harvesting plants for electricity

A Feb. 27, 2017 article on Nanowerk describes research which could turn living plants into solar cells and panels (Note: Links have been removed),

Plants power life on Earth. They are the original food source supplying energy to almost all living organisms and the basis of the fossil fuels that feed the power demands of the modern world. But burning the remnants of long-dead forests is changing the world in dangerous ways. Can we better harness the power of living plants today?

One way might be to turn plants into natural solar power stations that could convert sunlight into energy far more efficiently. To do this, we’d need a way of getting the energy out in the form of electricity. One company has found a way to harvest electrons deposited by plants into the soil beneath them. But new research (PNAS, “In vivo polymerization and manufacturing of wires and supercapacitors in plants”) from Finland looks at tapping plants’ energy directly by turning their internal structures into electric circuits.

A Feb. 27, 2017 essay by Stuart Thompson for The Conversation (which originated the article) explains the principles underlying the research (Note: A link has been removed),

Plants contain water-filled tubes called “xylem elements” that carry water from their roots to their leaves. The water flow also carries and distributes dissolved nutrients and other things such as chemical signals. The Finnish researchers, whose work is published in PNAS, developed a chemical that was fed into a rose cutting to form a solid material that could carry and store electricity.

Previous experiments have used a chemical called PEDOT to form conducting wires in the xylem, but it didn’t penetrate further into the plant. For the new research, they designed a molecule called ETE-S that forms similar electrical conductors but can also be carried wherever the stream of water travelling through the xylem goes.

This flow is driven by the attraction between water molecules. When water in a leaf evaporates, it pulls on the chain of molecules left behind, dragging water up through the plant all the way from the roots. You can see this for yourself by placing a plant cutting in food colouring and watching the colour move up through the xylem. The researchers’ method was so similar to the food colouring experiment that they could see where in the plant their electrical conductor had travelled to from its colour.

The result was a complex electronic network permeating the leaves and petals, surrounding their cells and replicating their pattern. The wires that formed conducted electricity up to a hundred times better than those made from PEDOT and could also store electrical energy in the same way as an electronic component called a capacitor.

I recommend reading Thompson’s piece in its entirety.

Mimicking the architecture of materials like wood and bone

Caption: Microstructures like this one developed at Washington State University could be used in batteries, lightweight ultrastrong materials, catalytic converters, supercapacitors and biological scaffolds. Credit: Washington State University

A March 3, 2017 news item on Nanowerk features a new 3D manufacturing technique for creating biolike materials (Note: A link has been removed),

Washington State University nanotechnology researchers have developed a unique, 3-D manufacturing method that for the first time rapidly creates and precisely controls a material’s architecture from the nanoscale to centimeters. The results closely mimic the intricate architecture of natural materials like wood and bone.

They report on their work in the journal Science Advances (“Three-dimensional microarchitected materials and devices using nanoparticle assembly by pointwise spatial printing”) and have filed for a patent.

A March 3, 2017 Washington State University news release by Tina Hilding (also on EurekAlert), which originated the news item, expands on the theme,

“This is a groundbreaking advance in the 3-D architecturing of materials at nano- to macroscales with applications in batteries, lightweight ultrastrong materials, catalytic converters, supercapacitors and biological scaffolds,” said Rahul Panat, associate professor in the School of Mechanical and Materials Engineering, who led the research. “This technique can fill a lot of critical gaps for the realization of these technologies.”

The WSU research team used a 3-D printing method to create foglike microdroplets that contain nanoparticles of silver and to deposit them at specific locations. As the liquid in the fog evaporated, the nanoparticles remained, creating delicate structures. The tiny structures, which look similar to Tinkertoy constructions, are porous, have an extremely large surface area and are very strong.

Silver was used because it is easy to work with. However, Panat said, the method can be extended to any other material that can be crushed into nanoparticles – and almost all materials can be.

The researchers created several intricate and beautiful structures, including microscaffolds that contain solid truss members like a bridge, spirals, electronic connections that resemble accordion bellows or doughnut-shaped pillars.

The manufacturing method itself is similar to a rare, natural process in which tiny fog droplets that contain sulfur evaporate over the hot western Africa deserts and give rise to crystalline flower-like structures called “desert roses.”

Because it uses 3-D printing technology, the new method is highly efficient, creates minimal waste and allows for fast and large-scale manufacturing.

The researchers would like to use such nanoscale and porous metal structures for a number of industrial applications; for instance, the team is developing finely detailed, porous anodes and cathodes for batteries rather than the solid structures that are now used. This advance could transform the industry by significantly increasing battery speed and capacity and allowing the use of new and higher energy materials.

Here’s a link to and a citation for the paper,

Three-dimensional microarchitected materials and devices using nanoparticle assembly by pointwise spatial printing by Mohammad Sadeq Saleh, Chunshan Hu, and Rahul Panat. Science Advances  03 Mar 2017: Vol. 3, no. 3, e1601986 DOI: 10.1126/sciadv.1601986

This paper appears to be open access.

Finally, there is a video,

3D printed biomimetic blood vessel networks

An artificial blood vessel network that could lead the way to regenerating biologically-based blood vessel networks has been printed in 3D at the University of California at San Diego (UCSD) according to a March 2, 2017 news item on ScienceDaily,

Nanoengineers at the University of California San Diego have 3D printed a lifelike, functional blood vessel network that could pave the way toward artificial organs and regenerative therapies.

The new research, led by nanoengineering professor Shaochen Chen, addresses one of the biggest challenges in tissue engineering: creating lifelike tissues and organs with functioning vasculature — networks of blood vessels that can transport blood, nutrients, waste and other biological materials — and do so safely when implanted inside the body.

A March 2, 2017 UCSD news release (also on EurekAlert), which originated the news item, explains why this is an important development,

Researchers from other labs have used different 3D printing technologies to create artificial blood vessels. But existing technologies are slow, costly and mainly produce simple structures, such as a single blood vessel — a tube, basically. These blood vessels also are not capable of integrating with the body’s own vascular system.

“Almost all tissues and organs need blood vessels to survive and work properly. This is a big bottleneck in making organ transplants, which are in high demand but in short supply,” said Chen, who leads the Nanobiomaterials, Bioprinting, and Tissue Engineering Lab at UC San Diego. “3D bioprinting organs can help bridge this gap, and our lab has taken a big step toward that goal.”

Chen’s lab has 3D printed a vasculature network that can safely integrate with the body’s own network to circulate blood. These blood vessels branch out into many series of smaller vessels, similar to the blood vessel structures found in the body. The work was published in Biomaterials.

Chen’s team developed an innovative bioprinting technology, using their own homemade 3D printers, to rapidly produce intricate 3D microstructures that mimic the sophisticated designs and functions of biological tissues. Chen’s lab has used this technology in the past to create liver tissue and microscopic fish that can swim in the body to detect and remove toxins.

Researchers first create a 3D model of the biological structure on a computer. The computer then transfers 2D snapshots of the model to millions of microscopic-sized mirrors, which are each digitally controlled to project patterns of UV light in the form of these snapshots. The UV patterns are shined onto a solution containing live cells and light-sensitive polymers that solidify upon exposure to UV light. The structure is rapidly printed one layer at a time, in a continuous fashion, creating a 3D solid polymer scaffold encapsulating live cells that will grow and become biological tissue.
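For readers who like to see the idea in code, here is a minimal sketch of the slicing step described above, i.e. turning a 3D model into the stack of 2D masks a micromirror array would project. It is my own toy illustration, not the Chen lab’s software, and the voxel ‘vessel’ model and function names are invented for the example.

```python
# A toy illustration (not the Chen lab's software) of slicing a 3D model into
# the 2D binary masks that would be projected, layer by layer, as UV patterns.

import numpy as np

def make_branching_vessel(shape=(64, 64, 32)):
    """Invented toy model: a tube that splits into two smaller branches."""
    x, y, z = np.indices(shape)
    trunk = ((x - 32) ** 2 + (y - 32) ** 2 < 8 ** 2) & (z < 16)
    left = ((x - 20) ** 2 + (y - 32) ** 2 < 5 ** 2) & (z >= 16)
    right = ((x - 44) ** 2 + (y - 32) ** 2 < 5 ** 2) & (z >= 16)
    return trunk | left | right

def slice_into_masks(volume):
    """Return one 2D boolean mask per layer, bottom to top."""
    return [volume[:, :, k] for k in range(volume.shape[2])]

model = make_branching_vessel()
masks = slice_into_masks(model)
# In the printer, each mask would drive the micromirror array while the
# light-sensitive, cell-laden solution is cured one layer at a time.
print(f"{len(masks)} layers; layer 0 cures {int(masks[0].sum())} voxels")
```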

“We can directly print detailed microvasculature structures in extremely high resolution. Other 3D printing technologies produce the equivalent of ‘pixelated’ structures in comparison and usually require sacrificial materials and additional steps to create the vessels,” said Wei Zhu, a postdoctoral scholar in Chen’s lab and a lead researcher on the project.

And this entire process takes just a few seconds — a vast improvement over competing bioprinting methods, which normally take hours just to print simple structures. The process also uses materials that are inexpensive and biocompatible.

Chen’s team used medical imaging to create a digital pattern of a blood vessel network found in the body. Using their technology, they printed a structure containing endothelial cells, which are cells that form the inner lining of blood vessels.

The entire structure fits onto a small area measuring 4 millimeters × 5 millimeters, 600 micrometers thick (as thick as a stack containing 12 strands of human hair).

Researchers cultured several structures in vitro for one day, then grafted the resulting tissues into skin wounds of mice. After two weeks, the researchers examined the implants and found that they had successfully grown into and merged with the host blood vessel network, allowing blood to circulate normally.

Chen noted that the implanted blood vessels are not yet capable of other functions, such as transporting nutrients and waste. “We still have a lot of work to do to improve these materials. This is a promising step toward the future of tissue regeneration and repair,” he said.

Moving forward, Chen and his team are working on building patient-specific tissues using human induced pluripotent stem cells, which would prevent transplants from being attacked by a patient’s immune system. And since these cells are derived from a patient’s skin cells, researchers won’t need to extract any cells from inside the body to build new tissue. The team’s ultimate goal is to move their work to clinical trials. “It will take at least several years before we reach that goal,” Chen said.

Here’s a link to and a citation for the paper,

Direct 3D bioprinting of prevascularized tissue constructs with complex microarchitecture by Wei Zhu, Xin Qu, Jie Zhu, Xuanyi Ma, Sherrina Patel, Justin Liu, Pengrui Wang, Cheuk Sun Edwin Lai, Maling Gou, Yang Xu, Kang Zhang, Shaochen Chen. Biomaterials 124 (April 2017) 106-15 http://dx.doi.org/10.1016/j.biomaterials.2017.01.042

This paper is behind a paywall.

There is also an open access copy here on the university website but I cannot confirm that it is identical to the version in the journal.

Entangling a single photon with a trillion atoms

Polish scientists have cast light on an eighty-year-old ‘paradox’ according to a March 2, 2017 news item on phys.org,

A group of researchers from the Faculty of Physics at the University of Warsaw has shed new light on the famous paradox of Einstein, Podolsky and Rosen after 80 years. They created a multidimensional entangled state of a single photon and a trillion hot rubidium atoms, and stored this hybrid entanglement in the laboratory for several microseconds. …

In their famous Physical Review article, published in 1935, Einstein, Podolsky and Rosen considered the decay of a particle into two products. In their thought experiment, two products of decay were projected in exactly opposite directions—or more scientifically speaking, their momenta were anti-correlated. Though not a mystery within the framework of classical physics, when applying the rules of quantum theory, the three researchers arrived at a paradox. The Heisenberg uncertainty principle, dictating that position and momentum of a particle cannot be measured at the same time, lies at the center of this paradox. In Einstein’s thought experiment, it is possible to measure the momentum of one particle and immediately know the momentum of the other without measurement, as it is exactly opposite. Then, by measuring the position of the second particle, the Heisenberg uncertainty principle is seemingly violated, an apparent paradox that seriously baffled the three physicists.
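For readers who want the bones of the argument in symbols, here is a minimal sketch (my notation, not taken from the news item or the paper):

```latex
% One-particle Heisenberg uncertainty relation:
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}

% EPR consider two particles whose positions are correlated and whose
% momenta are anti-correlated:
x_1 - x_2 = d, \qquad p_1 + p_2 = 0.

% These two combinations are compatible observables,
[\hat{x}_1 - \hat{x}_2,\; \hat{p}_1 + \hat{p}_2] = 0,

% so both can be sharp at once. Measuring p_1 then fixes p_2 without touching
% particle 2, and a subsequent position measurement on particle 2 seems to
% give that particle sharp values of both x and p, the apparent "paradox".
```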

A March 2, 2017 University of Warsaw press release (also on EurekAlert), which originated the news item, expands on the topic,

Only today we know that this experiment is not, in fact, a paradox. The mistake of Einstein and co-workers was to apply the one-particle uncertainty principle to a system of two particles. If we treat these two particles as described by a single quantum state, we learn that the original uncertainty principle ceases to apply, especially if these particles are entangled.

In the Quantum Memories Laboratory at the University of Warsaw, the group of three physicists was first to create such an entangled state consisting of a macroscopic object – a group of about one trillion atoms, and a single photon – a particle of light. “Single photons, scattered during the interaction of a laser beam with atoms, are registered on a sensitive camera. A single registered photon carries information about the quantum state of the entire group of atoms. The atoms may be stored, and their state may be retrieved on demand.” – says Michal Dabrowski, PhD student and co-author of the article.

The results of the experiment confirm that the atoms and the single photon are in a joint, entangled state. By measuring position and momentum of the photon, we gain all information about the state of atoms. To confirm this, Polish scientists convert the atomic state into another photon, which again is measured using the state-of-the-art camera developed in the Quantum Memories Laboratory. “We demonstrate the Einstein-Podolsky-Rosen apparent paradox in a very similar version as originally proposed in 1935, however we extend the experiment by adding storage of light within the large group of atoms. Atoms store the photon in a form of a wave made of atomic spins, containing one trillion atoms. Such a state is very robust against loss of a single atom, as information is spread across so many particles.” – says Michal Parniak, PhD student taking part in the study.

The experiment performed by the group from the University of Warsaw is unique in one other way as well. The quantum memory storing the entangled state, created thanks to “PRELUDIUM” grant from the Poland’s National Science Centre and “Diamentowy Grant” from the Polish Ministry of Science and Higher Education, allows for storage of up to 12 photons at once. This enhanced capacity is promising in terms of applications in quantum information processing. “The multidimensional entanglement is stored in our device for several microseconds, which is roughly a thousand times longer than in any previous experiments, and at the same time long enough to perform subtle quantum operations on the atomic state during storage” – explains Dr. Wojciech Wasilewski, group leader of the Quantum Memories Laboratory team.

The entanglement in the real and momentum space, described in the Optica article, can be used jointly with other well-known degrees of freedom such as polarization, allowing generation of so-called hyper-entanglement. Such elaborate ideas constitute new and original test of the fundamentals of quantum mechanics – a theory that is unceasingly mysterious yet brings immense technological progress.

Here’s a link to and a citation for the paper,

Einstein–Podolsky–Rosen paradox in a hybrid bipartite system by Michał Dąbrowski, Michał Parniak, and Wojciech Wasilewski. Optica Vol. 4, Issue 2, pp. 272-275 (2017). https://doi.org/10.1364/OPTICA.4.000272

This paper appears to be open access.

ArtSci salon at the University of Toronto opens its Cabinet Project on April 6, 2017

I announced The Cabinet Project in a Sept. 1, 2016 posting,

The ArtSci Salon; A Hub for the Arts & Science communities in Toronto and Beyond is soliciting proposals for ‘The Cabinet Project; An artsci exhibition about cabinets’ to be held March 30 – May 1, 2017 at the University of Toronto in a series of ‘science cabinets’ found around campus,

Despite being in full sight, many cabinets and showcases at universities and scientific institutions lie empty or underutilized. Located at the entrance of science departments, in proximity of laboratories, or in busy areas of transition, some contain outdated posters, or dusty scientific objects that have been forgotten there for years. Others lie empty, like old furniture on the curb after a move, waiting for a lucky passer-by in need. The ceaseless flow of bodies walking past these cabinets – some running to meetings, some checking their schedule, some immersed in their thoughts – rarely pay attention to them.

My colleague and I made a submission, which was not accepted (drat). In any event, I was somewhat curious as to which proposals had been successful. Here they are in a March 24, 2017 ArtSci Salon notice (received via email),

Join us to the opening of
The Cabinet Project
on April 6, 2017

* 4:00 PM Introduction and dry reception - THE FIELDS INSTITUTE FOR RESEARCH IN MATHEMATICAL SCIENCES

* 4:30 – 6:30 Tour of the Exhibition with the artists
* 6:30 – 9:00 Reception at VICTORIA COLLEGE

All Welcome
You can join at any time during the tour

More information can be found at
http://artscisalon.com/the-cabinet-project

RSVP Here

About The Cabinet Project

The Cabinet Project is a distributed exhibition bringing to life historical, anecdotal and imagined stories evoked by scientific objects, their surrounding spaces and the individuals inhabiting them. The goal is to make the intense creativity existing inside science laboratories visible, and to suggest potential interactions between the sciences and the arts. To achieve this goal, 12 artists have turned 10 cabinets across the University of Toronto into art installations.

Featuring works by: Catherine Beaudette; Nina Czegledy; Dave Kemp & Jonathon Anderson; Joel Ong & Mick Lorusso; Microcollection; Nicole Clouston; Nicole Liao; Rick Hyslop; Stefan Herda; Stefanie Kuzmiski

You can find out about the project, the artists, the program, and more on The Cabinet Project webpage here.

Ishiguro’s robots and Swiss scientist question artificial intelligence at SXSW (South by Southwest) 2017

It seemed unexpected to stumble across presentations on robots and on artificial intelligence at an entertainment conference such as South by Southwest (SXSW). Here’s why I thought so, from the SXSW Wikipedia entry (Note: Links have been removed),

South by Southwest (abbreviated as SXSW) is an annual conglomerate of film, interactive media, and music festivals and conferences that take place in mid-March in Austin, Texas, United States. It began in 1987, and has continued to grow in both scope and size every year. In 2011, the conference lasted for 10 days with SXSW Interactive lasting for 5 days, Music for 6 days, and Film running concurrently for 9 days.

Lifelike robots

The 2017 SXSW Interactive featured separate presentations by Japanese roboticist Hiroshi Ishiguro (mentioned here a few times) and EPFL (École Polytechnique Fédérale de Lausanne; Switzerland) artificial intelligence expert Marcel Salathé.

Ishiguro’s work is the subject of Harry McCracken’s March 14, 2017 article for Fast Company (Note: Links have been removed),

I’m sitting in the Japan Factory pavilion at SXSW in Austin, Texas, talking to two other attendees about whether human beings are more valuable than robots. I say that I believe human life to be uniquely precious, whereupon one of the others rebuts me by stating that humans allow cars to exist even though they kill humans.

It’s a reasonable point. But my fellow conventioneer has a bias: It’s a robot itself, with an ivory-colored, mask-like face and visible innards. So is the third participant in the conversation, a much more human automaton modeled on a Japanese woman and wearing a black-and-white blouse and a blue scarf.

We’re chatting as part of a demo of technologies developed by the robotics lab of Hiroshi Ishiguro, based at Osaka University, and Japanese telecommunications company NTT. Ishiguro has gained fame in the field by creating increasingly humanlike robots—that is, androids—with the ultimate goal of eliminating the uncanny valley that exists between people and robotic people.

I also caught up with Ishiguro himself at the conference—his second SXSW—to talk about his work. He’s a champion of the notion that people will respond best to robots who simulate humanity, thereby creating “a feeling of presence,” as he describes it. That gives him and his researchers a challenge that encompasses everything from technology to psychology. “Our approach is quite interdisciplinary,” he says, which is what prompted him to bring his work to SXSW.

A SXSW attendee talks about robots with two robots.

If you have the time, do read McCracken’s piece in its entirety.

You can find out more about the ‘uncanny valley’ in my March 10, 2011 posting about Ishiguro’s work if you scroll down about 70% of the way to find the ‘uncanny valley’ diagram and Masahiro Mori’s description of the concept he developed.

You can read more about Ishiguro and his colleague, Ryuichiro Higashinaka, on their SXSW biography page.

Artificial intelligence (AI)

In a March 15, 2017 EPFL press release by Hilary Sanctuary, scientist Marcel Salathé poses the question, “Is Reliable Artificial Intelligence Possible?”

In the quest for reliable artificial intelligence, EPFL scientist Marcel Salathé argues that AI technology should be openly available. He will be discussing the topic at this year’s edition of South by South West on March 14th in Austin, Texas.

Will artificial intelligence (AI) change the nature of work? For EPFL theoretical biologist Marcel Salathé, the answer is invariably yes. To him, a more fundamental question that needs to be addressed is who owns that artificial intelligence?

“We have to hold AI accountable, and the only way to do this is to verify it for biases and make sure there is no deliberate misinformation,” says Salathé. “This is not possible if the AI is privatized.”

AI is both the algorithm and the data

So what exactly is AI? It is generally regarded as “intelligence exhibited by machines”. Today, it is highly task specific, specially designed to beat humans at strategic games like Chess and Go, or diagnose skin disease on par with doctors’ skills.

On a practical level, AI is implemented through what scientists call “machine learning”, which means using a computer to run specifically designed software that can be “trained”, i.e. process data with the help of algorithms and to correctly identify certain features from that data set. Like human cognition, AI learns by trial and error. Unlike humans, however, AI can process and recall large quantities of data, giving it a tremendous advantage over us.

Crucial to AI learning, therefore, is the underlying data. For Salathé, AI is defined by both the algorithm and the data, and as such, both should be publicly available.
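As a concrete (and generic) illustration of the ‘algorithm plus data’ point, here is a minimal supervised-learning sketch in Python using scikit-learn; it is not Salathé’s plant-disease system, just the standard train-then-predict loop the press release describes.

```python
# Generic illustration of "machine learning": an algorithm is trained on a
# labelled data set, then asked to classify data it has never seen.
# This is not Salathé's plant-disease system.

from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()                     # the "data" half of the AI
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=2000)  # the "algorithm" half of the AI
model.fit(X_train, y_train)                # training: fitting the model to the data

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2%}")
```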

Deep learning algorithms can be perturbed

Last year, Salathé created an algorithm to recognize plant diseases. With more than 50,000 photos of healthy and diseased plants in the database, the algorithm uses artificial intelligence to diagnose plant diseases with the help of your smartphone. As for human disease, a recent study by a Stanford Group on cancer showed that AI can be trained to recognize skin cancer slightly better than a group of doctors. The consequences are far-reaching: AI may one day diagnose our diseases instead of doctors. If so, will we really be able to trust its diagnosis?

These diagnostic tools use data sets of images to train and learn. But visual data sets can be perturbed in ways that prevent deep learning algorithms from correctly classifying images. Deep neural networks are highly vulnerable to visual perturbations that are practically impossible to detect with the naked eye, yet cause the AI to misclassify images.

In future implementations of AI-assisted medical diagnostic tools, these perturbations pose a serious threat. More generally, the perturbations are real and may already be affecting the filtered information that reaches us every day. These vulnerabilities underscore the importance of certifying AI technology and monitoring its reliability.

h/t phys.org March 15, 2017 news item
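To give a sense of what those ‘practically impossible to detect’ perturbations look like in practice, here is a minimal sketch of the fast gradient sign method (Goodfellow et al., 2015), one well-known way of producing them. This is a generic PyTorch illustration of the technique, not the specific attacks or models the EPFL release refers to, and the function name is my own.

```python
# Minimal sketch of an adversarial perturbation (fast gradient sign method).
# Generic illustration only; not the specific attacks or models EPFL studies.

import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged so the model is more likely to misclassify it.

    `image`: tensor of shape (1, C, H, W) with values in [0, 1].
    `label`: tensor of shape (1,) holding the correct class index.
    `epsilon`: maximum per-pixel change, kept small so the edit is invisible to the eye.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage sketch, assuming `model` is any trained classifier and (image, label)
# is a correctly classified example:
#   adv = fgsm_perturb(model, image, label)
#   model(adv).argmax(dim=1) may now differ from model(image).argmax(dim=1).
```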

As I noted earlier, these are not the kind of presentations you’d expect at an ‘entertainment’ festival.