Tag Archives: U of T

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centred in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the Canadian AI scene: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president and engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch, and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative taking place between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

Encapsulation of proteins in nanoparticles no longer necessary for time release?

A team of researchers at the University of Toronto (Canada) has developed a technique for the therapeutic use of proteins that doesn’t require ‘nanoencapsulation’, although nanoparticles are still used, according to a May 27, 2016 news item on ScienceDaily,

A U of T [University of Toronto] Engineering team has designed a simpler way to keep therapeutic proteins where they are needed for long periods of time. The discovery is a potential game-changer for the treatment of chronic illnesses or injuries that often require multiple injections or daily pills.

For decades, biomedical engineers have been painstakingly encapsulating proteins in nanoparticles to control their release. Now, a research team led by University Professor Molly Shoichet has shown that proteins can be released over several weeks, even months, without ever being encapsulated. In this case the team looked specifically at therapeutic proteins relevant to tissue regeneration after stroke and spinal cord injury.

“It was such a surprising and unexpected discovery,” said co-lead author Dr. Irja Elliott Donaghue, who first found that the therapeutic protein NT3, a factor that promotes the growth of nerve cells, was slowly released when just mixed into a Jello-like substance that also contained nanoparticles. “Our first thought was, ‘What could be happening to cause this?'”

A May 27, 2016 University of Toronto news release (also on EurekAlert) by Marit Mitchell, which originated the news item, provides more in depth explanation,

Proteins hold enormous promise to treat chronic conditions and irreversible injuries — for example, human growth hormone is encapsulated in these tiny polymeric particles, and used to treat children with stunted growth. In order to avoid repeated injections or daily pills, researchers use complicated strategies both to deliver proteins to their site of action, and to ensure they’re released over a long enough period of time to have a beneficial effect.

This has long been a major challenge for protein-based therapies, especially because proteins are large and often fragile molecules. Until now, investigators have been treating proteins the same way as small drug molecules and encapsulating them in polymeric nanoparticles, often made of a material called poly(lactic-co-glycolic acid) or PLGA.

As the nanoparticles break down, the drug molecules escape. The same process is true for proteins; however, the encapsulating process itself often damages or denatures some of the encapsulated proteins, rendering them useless for treatment. Skipping encapsulation altogether means fewer denatured proteins, making for more consistent protein therapeutics that are easier to make and store.

“This is really exciting from a translational perspective,” said PhD candidate Jaclyn Obermeyer. “Having a simpler, more reliable fabrication process leaves less room for complications with scale-up for clinical use.”

The three lead authors, Elliott Donaghue, Obermeyer and Dr. Malgosia Pakulska have shown that to get the desired controlled release, proteins only need to be alongside the PLGA nanoparticles, not inside them. …

“We think that this could speed up the path for protein-based drugs to get to the clinic,” said Elliott Donaghue.

The mechanism for this encapsulation-free controlled release is surprisingly elegant. Shoichet’s group mixes the proteins and nanoparticles in a Jello-like substance called a hydrogel, which keeps them localized when injected at the site of injury. The positively charged proteins and negatively charged nanoparticles naturally stick together. As the nanoparticles break down they make the solution more acidic, weakening the attraction and letting the proteins break free.

“We are particularly excited to show long-term, controlled protein release by simply controlling the electrostatic interactions between proteins and polymeric nanobeads,” said Shoichet. “By manipulating the pH of the solution, the size and number of nanoparticles, we can control release of bioactive proteins. This has already changed and simplified the protein release strategies that we are pursuing in pre-clinical models of disease in the brain and spinal cord.”

“We’ve learned how to control this simple phenomenon,” Pakulska said. “Our next question is whether we can do the opposite—design a similar release system for positively charged nanoparticles and negatively charged proteins.”
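The mechanism described above lends itself to a back-of-the-envelope model. The sketch below is my own illustration, not the Shoichet group’s model: it assumes PLGA surface carboxylates with a pKa of roughly 4.5 and a hypothetical Langmuir-style binding affinity, and it reproduces only the qualitative trend the team reports, namely that protein release grows as nanoparticle degradation acidifies the gel and weakens the electrostatic attraction.

```python
import numpy as np

# Illustrative model of encapsulation-free release (assumed numbers):
# positively charged protein sticks to negatively charged PLGA nanoparticles;
# as the particles degrade they acidify the hydrogel, protonating surface
# carboxylates and weakening the attraction, so more protein breaks free.

PKA_COOH = 4.5  # assumed pKa of PLGA surface carboxyl groups

def charged_fraction(pH, pKa=PKA_COOH):
    """Henderson-Hasselbalch: fraction of carboxylates deprotonated (charged)."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

def bound_fraction(pH, K0=10.0):
    """Langmuir-like bound fraction; the effective affinity scales with the
    nanoparticle surface charge. K0 is a hypothetical affinity at full
    ionization, chosen only for illustration."""
    K = K0 * charged_fraction(pH)
    return K / (1.0 + K)

for pH in (7.4, 6.0, 5.0, 4.0):
    released = 1.0 - bound_fraction(pH)
    print(f"pH {pH}: ~{100 * released:.0f}% of protein released")
```

Because the bound fraction falls monotonically with pH, the model also captures the knobs Shoichet mentions: changing the number of nanoparticles (K0) or the solution pH shifts the release curve.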

Here’s a link to and a citation for the paper,

Encapsulation-free controlled release: Electrostatic adsorption eliminates the need for protein encapsulation in PLGA nanoparticles by Malgosia M. Pakulska, Irja Elliott Donaghue, Jaclyn M. Obermeyer, Anup Tuladhar, Christopher K. McLaughlin, Tyler N. Shendruk, and Molly S. Shoichet. Science Advances  27 May 2016: Vol. 2, no. 5, e1600519 DOI: 10.1126/sciadv.1600519

This paper appears to be open access.

Dr. Molly Shoichet was featured here in a May 11, 2015 posting about the launch of her Canada-wide science communication project Research2.Reality.

Interacting photons and quantum logic gates

University of Toronto physicists have taken the first step toward ‘working with pure light’ according to an August 25, 2015 news item on Nanotechnology Now,

A team of physicists at the University of Toronto (U of T) have taken a step toward making the essential building block of quantum computers out of pure light. Their advance, described in a paper published this week in Nature Physics, has to do with a specific part of computer circuitry known as a “logic gate.”

An August 25, 2015 University of Toronto news release by Patchen Barss, which originated the news item, provides an explanation of ‘logic gates’, photons, and the impact of this advance (Note: Links have been removed),

Logic gates perform operations on input data to create new outputs. In classical computers, logic gates take the form of diodes or transistors. But quantum computer components are made from individual atoms and subatomic particles. Information processing happens when the particles interact with one another according to the strange laws of quantum physics.

Light particles — known as “photons” — have many advantages in quantum computing, but it is notoriously difficult to get them to interact with one another in useful ways. This experiment demonstrates how to create such interactions.

“We’ve seen the effect of a single particle of light on another optical beam,” said Canadian Institute for Advanced Research (CIFAR) Senior Fellow Aephraim Steinberg, one of the paper’s authors and a researcher at U of T’s Centre for Quantum Information & Quantum Computing. “Normally light beams pass through each other with no effect at all. To build technologies like optical quantum computers, you want your beams to talk to one another. That’s never been done before using a single photon.”

The interaction was a two-step process. The researchers shot a single photon at rubidium atoms that they had cooled to a millionth of a degree above absolute zero. The photon became “entangled” with the atoms, which affected the way the rubidium interacted with a separate optical beam. The photon changed the atoms’ refractive index, which caused a tiny but measurable “phase shift” in the beam.

This process could be used as an all-optical quantum logic gate, allowing for inputs, information-processing and outputs.

“Quantum logic gates are the most obvious application of this advance,” said Steinberg. “But being able to see these interactions is the starting page of an entirely new field of optics. Most of what light does is so well understood that you wouldn’t think of it as a field of modern research. But two big exceptions are, ‘What happens when you deal with light one particle at a time?’ and ‘What happens when there are media like our cold atoms that allow different light beams to interact with each other?’”

Both questions have been studied, he says, but never together until now.
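To see why a conditional phase shift matters, note that a device which imposes a phase on one beam only when a single photon is present in the other is, in the abstract, a controlled-phase gate, the standard two-qubit primitive of optical quantum computing. The sketch below is my illustration (none of these numbers come from the experiment): it builds the gate as a matrix in NumPy and checks that, at a phase of π, it combines with single-qubit Hadamards to give a CNOT.

```python
import numpy as np

def cphase(phi):
    """4x4 two-qubit unitary: apply phase e^{i*phi} only to the |11> component,
    i.e. only when the 'control' photon is present."""
    U = np.eye(4, dtype=complex)
    U[3, 3] = np.exp(1j * phi)
    return U

# Unitarity: the gate preserves probability for any phase.
U = cphase(0.3)
assert np.allclose(U @ U.conj().T, np.eye(4))

# At phi = pi this is the CZ gate; sandwiching the target qubit between
# Hadamards turns it into a CNOT, showing that a conditional phase alone
# suffices for two-qubit logic.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.kron(np.eye(2), H) @ cphase(np.pi) @ np.kron(np.eye(2), H)
print(np.round(CNOT.real))
```

The experimental phase shift reported here is of course far smaller than π; the point of the sketch is only what such a gate looks like once the nonlinearity is strong enough.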

Here’s a link to and citation for the paper,

Observation of the nonlinear phase shift due to single post-selected photons by Amir Feizpour, Matin Hallaji, Greg Dmochowski, & Aephraim M. Steinberg. Nature Physics (2015) doi:10.1038/nphys3433 Published online 24 August 2015

This paper is behind a paywall.

University of Toronto researchers combine 2 different materials for new hyper-efficient, light-emitting, hybrid crystal

The Sargent Group at the University of Toronto has been quite active with regard to LEDs (light-emitting diodes) and with quantum dots. Their latest work is announced in a July 16, 2015 news item on Nanotechnology Now (Note: I had to include the ‘oatmeal cookie and chocolate chips’ analogy in the first paragraph as it’s referred to subsequently),

It’s snack time: you have a plain oatmeal cookie, and a pile of chocolate chips. Both are delicious on their own, but if you can find a way to combine them smoothly, you get the best of both worlds.

Researchers in The Edward S. Rogers Sr. Department of Electrical & Computer Engineering [University of Toronto] used this insight to invent something totally new: they’ve combined two promising solar cell materials together for the first time, creating a new platform for LED technology.

The team designed a way to embed strongly luminescent nanoparticles called colloidal quantum dots (the chocolate chips) into perovskite (the oatmeal cookie). Perovskites are a family of materials that can be easily manufactured from solution, and that allow electrons to move swiftly through them with minimal loss or capture by defects.

A July 15, 2015 University of Toronto news release (also on EurekAlert), which originated the news item, reveals more about the research (Note: A link has been removed),

“It’s a pretty novel idea to blend together these two optoelectronic materials, both of which are gaining a lot of traction,” says Xiwen Gong, one of the study’s lead authors and a PhD candidate working with Professor Ted Sargent. “We wanted to take advantage of the benefits of both by combining them seamlessly in a solid-state matrix.”

The result is a black crystal that relies on the perovskite matrix to ‘funnel’ electrons into the quantum dots, which are extremely efficient at converting electricity to light. Hyper-efficient LED technologies could enable applications from the visible-light LED bulbs in every home, to new displays, to gesture recognition using near-infrared wavelengths.

“When you try to jam two different crystals together, they often form separate phases without blending smoothly into each other,” says Dr. Riccardo Comin, a post-doctoral fellow in the Sargent Group. “We had to design a new strategy to convince these two components to forget about their differences and to rather intermix into forming a unique crystalline entity.”

The main challenge was making the orientation of the two crystal structures line up, a process called heteroepitaxy. To achieve heteroepitaxy, Gong, Comin and their team engineered a way to connect the atomic ‘ends’ of the two crystalline structures so that they aligned smoothly, without defects forming at the seams. “We started by building a nano-scale scaffolding ‘shell’ around the quantum dots in solution, then grew the perovskite crystal around that shell so the two faces aligned,” explained coauthor Dr. Zhijun Ning, who contributed to the work while a post-doctoral fellow at U of T and is now a faculty member at ShanghaiTech.

The resulting heterogeneous material is the basis for a new family of highly energy-efficient near-infrared LEDs. Infrared LEDs can be harnessed for improved night-vision technology, better biomedical imaging, and high-speed telecommunications.

Combining the two materials in this way also solves the problem of self-absorption, which occurs when a substance partly re-absorbs the same spectrum of energy that it emits, with a net efficiency loss. “These dots in perovskite don’t suffer reabsorption, because the emission of the dots doesn’t overlap with the absorption spectrum of the perovskite,” explains Comin.
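Comin’s point about self-absorption can be illustrated numerically: reabsorption scales with the overlap between a material’s emission spectrum and its absorption spectrum. The Gaussian peaks below are hypothetical, chosen by me only to contrast a material that emits right where it absorbs with dots that emit in the near-infrared, well clear of the perovskite host’s absorption.

```python
import numpy as np

wl = np.linspace(400.0, 1200.0, 2001)   # wavelength grid, nm
dwl = wl[1] - wl[0]

def spectrum(center, width=30.0):
    """Unit-area Gaussian line shape (hypothetical peak position and width)."""
    g = np.exp(-0.5 * ((wl - center) / width) ** 2)
    return g / (g.sum() * dwl)

def overlap(absorption_center, emission_center):
    """Spectral overlap integral: a proxy for how strongly emitted light
    can be reabsorbed by the absorbing species."""
    return (spectrum(absorption_center) * spectrum(emission_center)).sum() * dwl

# A material whose emission sits almost on top of its own absorption edge:
self_absorbing = overlap(absorption_center=770, emission_center=790)
# Dot-in-perovskite: host absorbing near 770 nm, dots emitting near 980 nm
# (numbers are illustrative, not measured spectra):
hybrid = overlap(absorption_center=770, emission_center=980)

print(f"overlapping spectra: {self_absorbing:.4f} per nm")
print(f"separated spectra:   {hybrid:.2e} per nm")
```

The overlap integral for the spectrally separated pair is orders of magnitude smaller, which is the sense in which the dots’ emission “doesn’t overlap with the absorption spectrum of the perovskite.”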

Gong, Comin and the team deliberately designed their material to be compatible with solution-processing, so it could be readily integrated with the most inexpensive and commercially practical ways of manufacturing solar film and devices. Their next step is to build and test the hardware to capitalize on the concept they have proven with this work.

“We’re going to build the LED device and try to beat the record power efficiency reported in the literature,” says Gong.

I see that Sargent’s work is still associated with and supported by Saudi Arabia, from the news release,

This work was supported by the Ontario Research Fund Research Excellence Program, the Natural Sciences and Engineering Research Council of Canada (NSERC), and the King Abdullah University of Science & Technology (KAUST).

Here’s a link to and a citation for the paper,

Quantum-dot-in-perovskite solids by Zhijun Ning, Xiwen Gong, Riccardo Comin, Grant Walters, Fengjia Fan, Oleksandr Voznyy, Emre Yassitepe, Andrei Buin, Sjoerd Hoogland, & Edward H. Sargent. Nature 523, 324–328 (16 July 2015) doi:10.1038/nature14563 Published online 15 July 2015

This paper is behind a paywall.

Finally, the researchers have made a .gif of their hybrid crystal available.

A glowing quantum dot seamlessly integrated into a perovskite crystal matrix (Image: Ella Marushchenko). Courtesy: University of Toronto

ETA July 17, 2015:

Dexter Johnson provides some additional insight into the work in his July 16, 2015 posting on the Nanoclast blog (on the Institute of Electrical and Electronics Engineers website), Note: Links have been removed,

Ted Sargent at the University of Toronto has built a reputation over the years as being a prominent advocate for the use of quantum dots in photovoltaics. Sargent has even penned a piece for IEEE Spectrum covering the topic, and this blog has covered his record breaking efforts at boosting the conversion efficiency of quantum dot-based photovoltaics a few times.

Earlier this year, however, Sargent started to take an interest in the hot material that has the photovoltaics community buzzing: perovskite. …

Canadian researchers develop test for exposure to nanoparticles*

The Canadian Broadcasting Corporation’s online news features a May 21, 2014 article by Emily Chung regarding research from the University of Toronto that may enable a simple skin test for determining nanoparticle exposure,

Canadian researchers have developed the first test for exposure to nanoparticles — new chemical technology found in a huge range of consumer products — that could potentially be used on humans.

Warren Chan, a University of Toronto [U of T] chemistry professor, and his team developed the skin test after noticing that some mice changed colour and others became fluorescent (that is, they glowed when light of certain colours was shone on them) after being exposed to increasing levels of different kinds of nanoparticles. The mice were being used in research to develop cancer treatments involving nanoparticles.

There is some evidence that certain types and levels of exposure may be harmful to human health. But until now, it has been hard to link exposure to health effects, partly due to the challenge of measuring exposure.

“There’s no way to determine how much [sic] nanoparticles you’ve been exposed to,” said Chan in an interview with CBCNews.ca.

There was one way to measure nanoparticle exposure in mice — but it required the animals to be dead. At that point, they would be cut open and tests could be run on organs such as the liver and spleen where nanoparticles accumulate.

A May 14, 2014 article by Nancy Owano on phys.org provides more details (Note: Links have been removed),

They [researchers] found that different nanoparticles are visible through the skin under ambient or UV light. They found that after intravenous injection of fluorescent nanoparticles, they accumulate and can be observed through the skin. They also found that the concentration of these nanoparticles can be directly correlated to the injected dose and their accumulations in other organs.

In their discussion over selecting nanoparticles used in mouse skin, they said, “Gold nanoparticles are commonly used in molecular diagnostics and drug delivery applications. These nanomaterials were selected for our initial studies as they are easily synthesized, have a distinct ruby color and can be quantified by inductively coupled plasma atomic emission spectroscopy (ICP-AES).”

Work involved in the study included designing and performing experiments, pathological analysis, and data analysis. Their discovery could be used to better predict how nanoparticles behave in the body.
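Since the study reports that the skin signal correlates directly with the injected dose, exposure could in principle be read back from a skin measurement via a calibration curve. The sketch below uses made-up numbers (not the study’s data, and a hypothetical slope) to show the idea: fit a line to dose-versus-signal points, then invert it to estimate an unknown exposure.

```python
import numpy as np

# Hypothetical calibration sketch: simulate noisy ICP-AES-style gold readings
# from skin at known injected doses, fit a line, and invert it. All numbers
# are invented for illustration.

rng = np.random.default_rng(0)
dose = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # injected dose (assumed units)
skin = 3.1 * dose + rng.normal(0.0, 0.2, dose.size)  # simulated skin signal

slope, intercept = np.polyfit(dose, skin, 1)          # least-squares calibration line

def estimate_dose(signal):
    """Invert the calibration line: estimate exposure from a skin reading."""
    return (signal - intercept) / slope

reading = 10.0
print(f"calibration: signal = {slope:.2f} * dose + {intercept:.2f}")
print(f"skin signal {reading} -> estimated dose {estimate_dose(reading):.2f}")
```

A real assay would of course need the measured dose-response for each nanoparticle type, since the article notes accumulation differs by particle kind.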

Here’s a link to and a citation for the paper,

Nanoparticle exposure in animals can be visualized in the skin and analysed via skin biopsy by Edward A. Sykes, Qin Dai, Kim M. Tsoi, David M. Hwang & Warren C. W. Chan. Nature Communications 5, Article number: 3796 doi:10.1038/ncomms4796 Published 13 May 2014

This paper is behind a paywall.

* Posting’s head changed from ‘Canadians and exposure to nanoparticles’ to the more descriptive ‘Canadian researchers develop test for exposure to nanoparticles’, May 27, 2014.