Tag Archives: University of Montreal

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as the agency likely to host the 2018 AI for Good Global Summit. But it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind on beneficial AI, which this year has ballooned in size to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes.

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out, in response to my surprise over the ITU’s role in this AI initiative, that the ITU is the only UN agency with 192* member states (countries), 150 universities, and over 700 industry members, as well as other member entities, which gives it tremendous breadth of reach. As well, the organization, founded in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with the XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action​​-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts distant from Geneva, you will want to take the nine-hour time difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.
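For anyone wanting to work out their local broadcast times, the conversion is easy to script. A minimal sketch in Python (the 9 AM Geneva start comes from the summit details above; everything else here is just a toy illustration):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Summit sessions begin daily at 9 AM in Geneva (on summer time in mid-May).
geneva_start = datetime(2018, 5, 15, 9, 0, tzinfo=ZoneInfo("Europe/Zurich"))

# Convert to Pacific Daylight Time for West Coast viewers.
pacific_start = geneva_start.astimezone(ZoneInfo("America/Vancouver"))
print(pacific_start.strftime("%Y-%m-%d %H:%M %Z"))  # 2018-05-15 00:00 PDT
```

In other words, a 9 AM Geneva start means settling in at midnight on the West Coast.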

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

*Redundant ‘and’ removed on July 19, 2018.

Shooting drugs to an infection site with a slingshot

It seems as if I’ve been writing up nanomedicine research a lot lately, so I might have skipped this piece. However, since I do try to cover Canadian nanotechnology regardless of topic, and this work features researchers from l’Université de Montréal (Québec, Canada), here’s one of the latest innovations in the field of nanomedicine. (I have some additional comments about the nano scene in Canada and one major issue concerning nanomedicine at the end of this posting.) From a May 8, 2017 news item on ScienceDaily,

An international team of researchers from the University of Rome Tor Vergata and the University of Montreal has reported, in a paper published this week in Nature Communications, the design and synthesis of a nanoscale molecular slingshot made of DNA that is 20,000 times smaller than a human hair. This molecular slingshot could “shoot” and deliver drugs at precise locations in the human body once triggered by specific disease markers.

A May 8, 2017 University of Montreal news release (also on EurekAlert), which originated the news item, delves further into the research (Note: A link has been removed),

The molecular slingshot is only a few nanometres long and is composed of a synthetic DNA strand that can load a drug and then effectively act as the rubber band of the slingshot. The two ends of this DNA “rubber band” contain two anchoring moieties that can specifically stick to a target antibody, a Y-shaped protein expressed by the body in response to different pathogens such as bacteria and viruses. When the anchoring moieties of the slingshot recognize and bind to the arms of the target antibody the DNA “rubber band” is stretched and the loaded drug is released.

“One impressive feature about this molecular slingshot,” says Francesco Ricci, Associate Professor of Chemistry at the University of Rome Tor Vergata, “is that it can only be triggered by the specific antibody recognizing the anchoring tags of the DNA ‘rubber band’. By simply changing these tags, one can thus program the slingshot to release a drug in response to a variety of specific antibodies. Since different antibodies are markers of different diseases, this could become a very specific weapon in the clinician’s hands.”

“Another great property of our slingshot,” adds Alexis Vallée-Bélisle, Assistant Professor in the Department of Chemistry at the University of Montreal, “is its high versatility. For example, until now we have demonstrated the working principle of the slingshot using three different trigger antibodies, including an HIV antibody, and employing nucleic acids as model drugs. But thanks to the high programmability of DNA chemistry, one can now design the DNA slingshot to ‘shoot’ a wide range of therapeutic molecules.”

“Designing this molecular slingshot was a great challenge,” says Simona Ranallo, a postdoctoral researcher in Ricci’s team and principal author of the new study. “It required a long series of experiments to find the optimal design, which keeps the drug loaded in the ‘rubber band’ in the absence of the antibody, without affecting its shooting efficiency too much once the antibody triggers the slingshot.”

The group of researchers is now eager to adapt the slingshot for the delivery of clinically relevant drugs, and to demonstrate its clinical efficiency. [emphasis mine] “We envision that similar molecular slingshots may be used in the near future to deliver drugs to specific locations in the body. This would drastically improve the efficiency of drugs as well as decrease their toxic secondary effects,” concludes Ricci.

Here’s a link to and a citation for the paper,

Antibody-powered nucleic acid release using a DNA-based nanomachine by Simona Ranallo, Carl Prévost-Tremblay, Andrea Idili, Alexis Vallée-Bélisle, & Francesco Ricci. Nature Communications 8, Article number: 15150 (2017). doi:10.1038/ncomms15150. Published online 08 May 2017.

This is an open access paper.

A couple of comments

The Canadian nanotechnology scene is pretty much centered in Alberta and Québec. The two provinces have invested a fair amount of money in their efforts. Despite the fact that the province of Alberta also hosts the federal government’s National Institute of Nanotechnology, it seems that the province of Québec is the one making the most progress in its various ‘nano’ fields of endeavour. Another province that should be mentioned with regard to its ‘nano’ efforts is Ontario. As far as I can tell, nanotechnology there doesn’t enjoy the same level of provincial funding support as the other two but there is some important work coming out of Ontario.

My other comment has to do with nanomedicine. While it is an exciting field, there is a tendency toward a certain hyperbole. For anyone who got excited about the ‘slingshot’, don’t forget that it hasn’t been tested under conditions close to those found in a human body, nor have the researchers yet used “… clinically relevant drugs …”. It’s also useful to know that less than 1% of the drugs in nanoparticle-based delivery systems make their way to the affected site (from an April 27, 2016 posting about research investigating the effectiveness of nanoparticle-based drug delivery systems). By the way, it was a researcher at the University of Toronto (Ontario, Canada) who first noted this phenomenon after a meta-analysis of the research,

More generally, the authors argue that, in order to increase nanoparticle delivery efficiency, a systematic and coordinated long-term strategy is necessary. To build a strong foundation for the field of cancer nanomedicine, researchers will need to understand a lot more about the interactions between nanoparticles and the body’s various organs than they do today. …

It’s not clear from the news release, the paper, or the May 8, 2017 article by Sherry Noik for the Canadian Broadcasting Corporation’s News Online website how this proposed solution would be administered, but presumably the same factors that affect other nano-based drug-delivery systems could affect this new one,

Scientists have for many years been working on improving therapies like chemo and radiation on that score, but most efforts have focused on modifying the chemistry rather than altering the delivery of the drug.

“It’s all about tuning the concentration of the drug optimally in the body: high concentration where you want it to be active, and low concentration where you don’t want to affect other healthy parts,” says Prof. Alexis Vallée-Bélisle of the University of Montreal, co-author of the report published this week in Nature Communications.

“If you can increase the concentration of that drug at the specific location, that drug will be more efficient,” he told CBC News in an interview.

‘Like a weapon’

Restricting the movement of the drug also reduces potentially harmful secondary effects on other parts of the body — for instance, the hair loss that can result from toxic cancer treatments, or the loss of so-called good bacteria due to antibiotic use.

The idea of the slingshot is to home in on the target cells at a molecular level.

The two ends of the strand anchor themselves to the antibody, stretching the strand taut and catapulting the drug to its target.

“Imagine our slingshot like a weapon, and this weapon is being used by our own antibody,” said Vallée-Bélisle, who heads the Laboratory of Biosensors & Nanomachines at U of M. “We design a specific weapon targeting, for example, HIV. We provide the weapon in the body with the bullet — the drug. If the right soldier is there, the soldier can use the weapon and shoot the problem.”

Equally important: if the wrong soldier is present, the weapon won’t be deployed.

So rather than delay treatment for an unidentified infection that could be either viral or bacterial, a patient could receive the medication for both and their body would only use the one it needed.

Getting back to my commentary, how does the drug get to its target? Through the bloodstream?  Does it get passed through various organs? How do we increase the amount of medication (in nano-based drug delivery systems) reaching affected areas from less than 1%?

The researchers deserve to be congratulated for this work and given much encouragement and thanks as they grapple with the questions I’ve posed and with all of the questions I don’t know how to ask.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy of the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much-lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with the other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at the Université de Montréal) testified at the US Presidential Commission for the Study of Bioethical Issues’ Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge-watch and by smartphones with voice-assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting. That initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s my earlier posting from today (March 31, 2017): China, US, and the race for artificial intelligence research domination.

Taking DNA beyond genetics with living computers and nanobots

You might want to keep a salt shaker with you while reading a June 7, 2016 essay by Matteo Palma (Queen Mary University of London) about nanotechnology and DNA on The Conversation website (h/t June 7, 2016 news item on Nanowerk).

This is not a ‘hype’ piece: Palma backs every claim with links to the research while providing a good overview of some very exciting work. Still, the mood is a bit euphoric, so you may want to keep the aforementioned salt shaker nearby.

Palma offers a very nice beginner introduction, especially helpful for someone who only half-remembers their high school biology (from the June 7, 2016 essay),

DNA is one of the most amazing molecules in nature, providing a way to carry the instructions needed to create almost any lifeform on Earth in a microscopic package. Now scientists are finding ways to push DNA even further, using it not just to store information but to create physical components in a range of biological machines.

Deoxyribonucleic acid or “DNA” carries the genetic information that we, and all living organisms, use to function. It typically comes in the form of the famous double-helix shape, made up of two single-stranded DNA molecules folded into a spiral. Each of these is made up of a series of four different types of molecular component: adenine (A), guanine (G), thymine (T), and cytosine (C).

Genes are made up from different sequences of these building block components, and the order in which they appear in a strand of DNA is what encodes genetic information. But by precisely designing different A,G,T and C sequences, scientists have recently been able to develop new ways of folding DNA into different origami shapes, beyond the conventional double helix.

This approach has opened up new possibilities of using DNA beyond its genetic and biological purpose, turning it into a Lego-like material for building objects that are just a few billionths of a metre in diameter (nanoscale). DNA-based materials are now being used for a variety of applications, ranging from templates for electronic nano-devices, to ways of precisely carrying drugs to diseased cells.
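Palma’s ‘Lego-like material’ point rests on the base-pairing rule he describes: A binds only to T and G binds only to C, so a designed sequence will stick exclusively to its exact complement. A toy sketch of that rule (the sequence below is made up):

```python
# Toy illustration of DNA base pairing: each base has exactly one
# partner (A-T, G-C), which is what lets a designed strand "recognize"
# its complement and fold into programmed shapes.

PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Return the base-paired partner strand, read in the same direction."""
    return "".join(PAIR[base] for base in strand)

print(complement("ATGCGT"))  # TACGCA
```

Complementing twice returns the original strand, which is exactly the mutual-recognition property DNA origami designers exploit.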

He highlights some Canadian work,

Designing electronic devices that are just nanometres in size opens up all sorts of possible applications but makes it harder to spot defects. As a way of dealing with this, researchers at the University of Montreal have used DNA to create ultrasensitive nanoscale thermometers that could help find minuscule hotspots in nanodevices (which would indicate a defect). They could also be used to monitor the temperature inside living cells.

The nanothermometers are made using loops of DNA that act as switches, folding or unfolding in response to temperature changes. This movement can be detected by attaching optical probes to the DNA. The researchers now want to build these nanothermometers into larger DNA devices that can work inside the human body.

He also mentions the nanobots that will heal your body (according to many works of fiction),

Researchers at Harvard Medical School have used DNA to design and build a nanosized robot that acts as a drug delivery vehicle to target specific cells. The nanorobot comes in the form of an open barrel made of DNA, whose two halves are connected by a hinge held shut by special DNA handles. These handles can recognise combinations of specific proteins present on the surface of cells, including ones associated with diseases.

When the robot comes into contact with the right cells, it opens the container and delivers its cargo. When applied to a mixture of healthy and cancerous human blood cells, these robots showed the ability to target and kill half of the cancer cells, while the healthy cells were left unharmed.
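The handle mechanism amounts to an AND gate: the barrel stays shut unless every lock recognizes its target surface protein. A minimal logical sketch (the marker names are hypothetical, not the proteins used in the Harvard work):

```python
# AND-gate logic of the DNA nanorobot's locks: the payload is exposed
# only if ALL required surface markers are present on the cell.
# Marker names below are invented for illustration.

REQUIRED_MARKERS = {"antigen_A", "antigen_B"}

def lock_opens(cell_surface_markers, required=REQUIRED_MARKERS):
    """The robot opens only if every required marker is on the cell surface."""
    return required.issubset(cell_surface_markers)

print(lock_opens({"antigen_A", "antigen_B", "other"}))  # True  (diseased cell)
print(lock_opens({"antigen_A"}))                        # False (healthy cell)
```

Requiring a combination of markers, rather than any single one, is what lets the robot discriminate cancerous cells from healthy ones that may share one marker.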

Palma is describing a very exciting development and there are many teams worldwide working on ways to make drugs more effective and less side-effect-ridden. However, there does seem to be a bit of a problem with targeted drug delivery, as noted in my April 27, 2016 posting,

According to an April 27, 2016 news item on Nanowerk, researchers at the University of Toronto (Canada), along with their collaborators in the US (Harvard Medical School) and Japan (University of Tokyo), have determined that less than 1% of nanoparticle-based drugs reach their intended destination …

Less than 1%? Admittedly, nanoparticles are not the same as nanobots but the problem is in the delivery, from my April 27, 2016 posting,

… the authors argue that, in order to increase nanoparticle delivery efficiency, a systematic and coordinated long-term strategy is necessary. To build a strong foundation for the field of cancer nanomedicine, researchers will need to understand a lot more about the interactions between nanoparticles and the body’s various organs than they do today. …

I imagine nanobots will suffer a similar fate since the actual delivery mechanism to a targeted cell is still a mystery.

I quite enjoyed Palma’s essay and appreciated the links he provided. My only proviso: keep a salt shaker nearby. That rosy future is going to take a while to get here.

Nanotechnology and cybersecurity risks

Gregory Carpenter has written a gripping (albeit somewhat exaggerated) piece for Signal, a publication of the Armed Forces Communications and Electronics Association (AFCEA), about cybersecurity issues and nanomedicine endeavours. From Carpenter’s Jan. 1, 2016 article titled, When Lifesaving Technology Can Kill; The Cyber Edge,

The exciting advent of nanotechnology that has inspired disruptive and lifesaving medical advances is plagued by cybersecurity issues that could result in the deaths of people that these very same breakthroughs seek to heal. Unfortunately, nanorobotic technology has suffered from the same security oversights that afflict most other research and development programs.

Nanorobots, or small machines [or nanobots], are vulnerable to exploitation just like other devices.

At the moment, the issue of cybersecurity exploitation is secondary to making nanobots, or nanorobots, dependably functional. As far as I’m aware, there is no such nanobot. Even nanoparticles meant to function as packages for drug delivery have not been perfected (see one of the controversies with nanomedicine drug delivery described in my Nov. 26, 2015 posting).

That said, Carpenter’s point about cybersecurity is well taken since security features are often overlooked in new technology. For example, automated banking machines (ABMs) had woefully poor (inadequate, almost nonexistent) security when they were first introduced.

Carpenter outlines some of the problems that could occur, assuming some of the latest research could be reliably brought to market,

The U.S. military has joined the fray of nanorobotic experimentation, embarking on revolutionary research that could lead to a range of discoveries, from unraveling the secrets of how brains function to figuring out how to permanently purge bad memories. Academia is making amazing advances as well. Harnessing progress by Harvard scientists to move nanorobots within humans, researchers at the University of Montreal, Polytechnique Montreal and Centre Hospitalier Universitaire Sainte-Justine are using mobile nanoparticles inside the human brain to open the blood-brain barrier, which protects the brain from toxins found in the circulatory system.

A different type of technology presents a risk similar to the nanoparticles scenario. A DARPA-funded program known as Restoring Active Memory (RAM) addresses post-traumatic stress disorder, attempting to overcome memory deficits by developing neuroprosthetics that bridge gaps in an injured brain. In short, scientists can wipe out a traumatic memory, and they hope to insert a new one—one the person has never actually experienced. Someone could relish the memory of a stroll along the French Riviera rather than a terrible firefight, even if he or she has never visited Europe.

As an individual receives a disruptive memory, a cyber criminal could manage to hack the controls. Breaches of the brain could become a reality, putting humans at risk of becoming zombie hosts [emphasis mine] for future virus deployments. …

At this point, the ‘zombie’ scenario Carpenter suggests seems a bit over-the-top but it does hearken to the roots of the zombie myth where the undead aren’t mindlessly searching for brains but are humans whose wills have been overcome. Mike Mariani in an Oct. 28, 2015 article for The Atlantic has presented a thought-provoking history of zombies,

… the zombie myth is far older and more rooted in history than the blinkered arc of American pop culture suggests. It first appeared in Haiti in the 17th and 18th centuries, when the country was known as Saint-Domingue and ruled by France, which hauled in African slaves to work on sugar plantations. Slavery in Saint-Domingue under the French was extremely brutal: Half of the slaves brought in from Africa were worked to death within a few years, which only led to the capture and import of more. In the hundreds of years since, the zombie myth has been widely appropriated by American pop culture in a way that whitewashes its origins—and turns the undead into a platform for escapist fantasy.

The original brains-eating fiend was a slave not to the flesh of others but to his own. The zombie archetype, as it appeared in Haiti and mirrored the inhumanity that existed there from 1625 to around 1800, was a projection of the African slaves’ relentless misery and subjugation. Haitian slaves believed that dying would release them back to lan guinée, literally Guinea, or Africa in general, a kind of afterlife where they could be free. Though suicide was common among slaves, those who took their own lives wouldn’t be allowed to return to lan guinée. Instead, they’d be condemned to skulk the Hispaniola plantations for eternity, an undead slave at once denied their own bodies and yet trapped inside them—a soulless zombie.

I recommend reading Mariani’s article although I do have one nit to pick. I can’t find a reference to brain-eating zombies until George Romero’s introduction of the concept in his movies. This Zombie Wikipedia entry seems to be in agreement with my understanding (if I’m wrong, please do let me know and, if possible, provide a link to the corrective text).

Getting back to Carpenter and cybersecurity with regard to nanomedicine: while his scenarios may seem a trifle extreme, it’s precisely the kind of thinking you need when attempting to anticipate problems. I do wish he’d made clear that the technology still has a ways to go.

Reversing Parkinson’s type symptoms in rats

Indian scientists have developed a technique for delivering drugs that could reverse Parkinson-like symptoms according to an April 22, 2015 news item on Nanowerk (Note: A link has been removed),

As baby boomers age, the number of people diagnosed with Parkinson’s disease is expected to increase. Patients who develop this disease usually start experiencing symptoms around age 60 or older. Currently, there’s no cure, but scientists are reporting a novel approach that reversed Parkinson’s-like symptoms in rats.

Their results, published in the journal ACS Nano (“Trans-Blood Brain Barrier Delivery of Dopamine-Loaded Nanoparticles Reverses Functional Deficits in Parkinsonian Rats”), could one day lead to a new therapy for human patients.

An April 22, 2015 American Chemical Society press pac news release (also on EurekAlert), which originated the news item, describes the problem the researchers were solving (Note: Links have been removed),

Rajnish Kumar Chaturvedi, Kavita Seth, Kailash Chand Gupta and colleagues from the CSIR-Indian Institute of Toxicology Research note that among other issues, people with Parkinson’s lack dopamine in the brain. Dopamine is a chemical messenger that helps nerve cells communicate with each other and is involved in normal body movements. Reduced levels cause the shaking and mobility problems associated with Parkinson’s. Symptoms can be relieved in animal models of the disease by infusing the compound into their brains. But researchers haven’t yet figured out how to safely deliver dopamine directly to the human brain, which is protected by something called the blood-brain barrier that keeps out pathogens, as well as many medicines. Chaturvedi and Gupta’s team wanted to find a way to overcome this challenge.

The researchers packaged dopamine in biodegradable nanoparticles that have been used to deliver other therapeutic drugs to the brain. The resulting nanoparticles successfully crossed the blood-brain barrier in rats, released their dopamine payload over several days and reversed the rodents’ movement problems without causing side effects.

The authors acknowledge funding from the Indian Department of Science and Technology as Woman Scientist and Ramanna Fellow Grant, and the Council of Scientific and Industrial Research (India).

Here’s a link to and citation for the paper,

Trans-Blood Brain Barrier Delivery of Dopamine-Loaded Nanoparticles Reverses Functional Deficits in Parkinsonian Rats by Richa Pahuja, Kavita Seth, Anshi Shukla, Rajendra Kumar Shukla, Priyanka Bhatnagar, Lalit Kumar Singh Chauhan, Prem Narain Saxena, Jharna Arun, Bhushan Pradosh Chaudhari, Devendra Kumar Patel, Sheelendra Pratap Singh, Rakesh Shukla, Vinay Kumar Khanna, Pradeep Kumar, Rajnish Kumar Chaturvedi, and Kailash Chand Gupta. ACS Nano, Article ASAP DOI: 10.1021/nn506408v Publication Date (Web): March 31, 2015
Copyright © 2015 American Chemical Society

This paper is open access.

Another recent example of breaching the blood-brain barrier, coincidentally in rats, can be found in my Dec. 24, 2014 posting titled: Gelatin nanoparticles for drug delivery after a stroke. Scientists are also trying to figure out how the blood-brain barrier operates in the first place, as per this April 22, 2015 University of Pennsylvania news release on EurekAlert titled, Penn Vet, Montreal and McGill researchers show how blood-brain barrier is maintained (University of Pennsylvania School of Veterinary Medicine, University of Montreal or Université de Montréal, and McGill University). You can find out more about CSIR-Indian Institute of Toxicology Research here.

Faster, cheaper, and just as good—nanoscale device for measuring cancer drug methotrexate

Lots of cancer drugs can be toxic if the dosage is too high for an individual’s metabolism, and metabolisms vary greatly in their ability to break drugs down. The University of Montréal (Université de Montréal) has announced a device that could make measuring drug levels in the bloodstream much faster and cheaper, according to an Oct. 27, 2014 news item on Nanowerk,

In less than a minute, a miniature device developed at the University of Montreal can measure a patient’s blood for methotrexate, a commonly used but potentially toxic cancer drug. Just as accurate and ten times less expensive than equipment currently used in hospitals, this nanoscale device has an optical system that can rapidly gauge the optimal dose of methotrexate a patient needs, while minimizing the drug’s adverse effects. The research was led by Jean-François Masson and Joelle Pelletier of the university’s Department of Chemistry.

An Oct. 27, 2014 University of Montréal news release, which originated the news item, provides more specifics about the cancer drug being monitored and the research that led to the new device,

Methotrexate has been used for many years to treat certain cancers, among other diseases, because of its ability to block the enzyme dihydrofolate reductase (DHFR). This enzyme is active in the synthesis of DNA precursors and thus promotes the proliferation of cancer cells. “While effective, methotrexate is also highly toxic and can damage the healthy cells of patients, hence the importance of closely monitoring the drug’s concentration in the serum of treated individuals to adjust the dosage,” Masson explained.

Until now, monitoring has been done in hospitals with a device using fluorescent bioassays to measure light polarization produced by a drug sample. “The operation of the current device is based on a cumbersome, expensive platform that requires experienced personnel because of the many samples that need to be manipulated,” Masson said.

Six years ago, Joelle Pelletier, a specialist of the DHFR enzyme, and Jean-François Masson, an expert in biomedical instrument design, investigated how to simplify the measurement of methotrexate concentration in patients.

In the course of their research, they developed and manufactured a miniaturized device that works by surface plasmon resonance. Roughly, it measures the concentration of serum (or blood) methotrexate through gold nanoparticles on the surface of a receptacle. In “competing” with methotrexate to block the enzyme, the gold nanoparticles change the colour of the light detected by the instrument. And the colour of the light detected reflects the exact concentration of the drug in the blood sample.

The measurements taken by the new device were compared with those produced by equipment used at the Maisonneuve-Rosemont Hospital in Montreal. “Testing was conclusive: not only were the measurements as accurate, but our device took less than 60 seconds to produce results, compared to 30 minutes for current devices,” Masson said. Moreover, the comparative tests were performed by laboratory technicians who were not experienced with surface plasmon resonance and did not encounter major difficulties in operating the new equipment or obtaining the same conclusive results as Masson and his research team.

In addition to producing results in real time, the device designed by Masson is small and portable and requires little manipulation of samples. “In the near future, we can foresee the device in doctors’ offices or even at the bedside, where patients would receive individualized and optimal doses while minimizing the risk of complications,” Masson said. Another benefit, and a considerable one: “While traditional equipment requires an investment of around $100,000, the new mobile device would likely cost ten times less, around $10,000.”
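Conceptually, turning a detected colour shift into a dose reading is a calibration problem: measure standards of known methotrexate concentration, then read unknown samples off that curve by interpolation. The sketch below uses invented numbers and is not the Masson group’s actual signal processing:

```python
# Hypothetical calibration-curve readout for an optical sensor:
# standards of known concentration give (signal, concentration) pairs,
# and an unknown sample is linearly interpolated between them.
# All numbers are invented for illustration.

from bisect import bisect_left

# (plasmon shift in nm, methotrexate concentration in umol/L) -- made up
CALIBRATION = [(0.0, 0.0), (1.2, 0.5), (2.1, 1.0), (3.4, 2.0), (4.0, 3.0)]

def concentration(signal):
    """Linearly interpolate a concentration from a calibrated signal,
    clamping to the ends of the calibrated range."""
    signals = [s for s, _ in CALIBRATION]
    i = bisect_left(signals, signal)
    if i == 0:
        return CALIBRATION[0][1]
    if i == len(CALIBRATION):
        return CALIBRATION[-1][1]
    (s0, c0), (s1, c1) = CALIBRATION[i - 1], CALIBRATION[i]
    return c0 + (c1 - c0) * (signal - s0) / (s1 - s0)

print(concentration(2.75))  # halfway between 2.1 and 3.4 -> 1.5
```

The real instrument’s response curve would be established per device and likely nonlinear, but the principle of reading a dose off a calibrated optical signal is the same.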

For those who prefer to read the material in French, here’s a link to ‘le 27 Octobre 2014 communiqué de nouvelles‘.

Here’s a prototype of the device,

Gold nanoparticles on the surface of the sensor strip change the colour of the light detected by the instrument. The detected colour reflects the exact concentration of the drug contained in the blood sample. Courtesy Université de Montréal

There is no indication as to when this might come to market, in English or in French.

The evolution of molecules as observed with femtosecond stimulated Raman spectroscopy

A July 3, 2014 news item on Azonano features some recent research from the Université de Montréal (amongst other institutions),

Scientists don’t fully understand how ‘plastic’ solar panels work, which complicates the improvement of their cost efficiency, thereby blocking the wider use of the technology. However, researchers at the University of Montreal, the Science and Technology Facilities Council, Imperial College London and the University of Cyprus have determined how light beams excite the chemicals in solar panels, enabling them to produce charge.

A July 2, 2014 University of Montreal news release, which originated the news item, provides a fascinating description of the ultrafast laser process used to make the observations,

“We used femtosecond stimulated Raman spectroscopy,” explained Tony Parker of the Science and Technology Facilities Council’s Central Laser Facility. “Femtosecond stimulated Raman spectroscopy is an advanced ultrafast laser technique that provides details on how chemical bonds change during extremely fast chemical reactions. The laser provides information on the vibration of the molecules as they interact with the pulses of laser light.” Extremely complicated calculations on these vibrations enabled the scientists to ascertain how the molecules were evolving. Firstly, they found that after the electron moves away from the positive centre, the rapid molecular rearrangement must be prompt and resemble the final products within around 300 femtoseconds (0.0000000000003 s). A femtosecond is a quadrillionth of a second – a femtosecond is to a second as a second is to roughly 31.7 million years. This promptness and speed enhances and helps maintain charge separation. Secondly, the researchers noted that any ongoing relaxation and molecular reorganisation processes following this initial charge separation, as visualised using the FSRS method, should be extremely small.
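That analogy is easy to verify: a second contains 10^15 femtoseconds, and 10^15 seconds works out to roughly 31.7 million years.

```python
# Back-of-the-envelope check of the femtosecond analogy:
# a femtosecond is to a second as a second is to how many years?

FEMTOSECOND = 1e-15                    # seconds
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # about 3.156e7

ratio_in_seconds = 1.0 / FEMTOSECOND   # 1e15 seconds
years = ratio_in_seconds / SECONDS_PER_YEAR
print("%.1f million years" % (years / 1e6))  # 31.7 million years
```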

As for why the researchers’ curiosity was stimulated (from the news release),

The researchers have been investigating the fundamental beginnings of the reactions that underpin solar energy conversion devices, studying the new brand of photovoltaic diodes that are based on blends of polymeric semiconductors and fullerene derivatives. Polymers are large molecules made up of many smaller molecules of the same kind – consisting of so-called ‘organic’ building blocks because they are composed of atoms that also compose molecules for life (carbon, nitrogen, sulphur). A fullerene is a molecule in the shape of a football, made of carbon. “In these and other devices, the absorption of light fuels the formation of an electron and a positively charged species. To ultimately provide electricity, these two attractive species must separate and the electron must move away. If the electron is not able to move away fast enough then the positive and negative charges simply recombine and effectively nothing changes. The overall efficiency of solar devices compares how much recombines and how much separates,” explained Sophia Hayes of the University of Cyprus, last author of the study.

… “Our findings open avenues for future research into understanding the differences between material systems that actually produce efficient solar cells and systems that should be as efficient but in fact do not perform as well. A greater understanding of what works and what doesn’t will obviously enable better solar panels to be designed in the future,” said the University of Montreal’s Carlos Silva, who was senior author of the study.

Here’s a link to and a citation for the paper,

Direct observation of ultrafast long-range charge separation at polymer–fullerene heterojunctions by Françoise Provencher, Nicolas Bérubé, Anthony W. Parker, Gregory M. Greetham, Michael Towrie, Christoph Hellmann, Michel Côté, Natalie Stingelin, Carlos Silva & Sophia C. Hayes. Nature Communications 5, Article number: 4288 doi:10.1038/ncomms5288 Published 01 July 2014

This article is behind a paywall but there is a free preview available via ReadCube Access.

Canada’s ‘nano’satellites to gaze upon luminous stars

The launch (from Yasny, Russia) of two car battery-sized satellites happened on June 18, 2014 at 15:11:11 Eastern Daylight Time according to a June 18, 2014 University of Montreal (Université de Montréal) news release (also on EurekAlert).

Together, the satellites are known as the BRITE-Constellation, standing for BRIght Target Explorer. “BRITE-Constellation will monitor for long stretches of time the brightness and colour variations of most of the brightest stars visible to the eye in the night sky. These stars include some of the most massive and luminous stars in the Galaxy, many of which are precursors to supernova explosions. This project will contribute to unprecedented advances in our understanding of such stars and the life cycles of the current and future generations of stars,” said Professor Moffat [Anthony Moffat, of the University of Montreal and the Centre for Research in Astrophysics of Quebec], who is the scientific mission lead for the Canadian contribution to BRITE and current chair of the international executive science team.

Here’s what the satellites (BRITE-Constellatio) are looking for (from the news release),

Luminous stars dominate the ecology of the Universe. “During their relatively brief lives, massive luminous stars gradually eject enriched gas into the interstellar medium, adding heavy elements critical to the formation of future stars, terrestrial planets and organics. In their spectacular deaths as supernova explosions, massive stars violently inject even more crucial ingredients into the mix. The first generation of massive stars in the history of the Universe may have laid the imprint for all future stellar history,” Moffat explained. “Yet, massive stars – rapidly spinning and with radiation fields whose pressure resists gravity itself – are arguably the least understood, despite being the brightest members of the familiar constellations of the night sky.” Other less-massive stars, including stars similar to our own Sun, also contribute to the ecology of the Universe, but only at the end of their lives, when they brighten by factors of a thousand and shed off their tenuous outer layers.

BRITE-Constellation is both a multinational effort and a Canadian bi-provincial effort,

BRITE-Constellation is in fact a multinational effort that relies on pioneering Canadian space technology and a partnership with Austrian and Polish space researchers – the three countries act as equal partners. Canada’s participation was made possible thanks to an investment of $4.07 million by the Canadian Space Agency. The two new Canadian satellites are joining two Austrian satellites and a Polish satellite already in orbit; the final Polish satellite will be launched in August [2014?].

All six satellites were designed by the University of Toronto Institute for Aerospace Studies – Space Flight Laboratory, who also built the Canadian pair. The satellites were in fact named “BRITE Toronto” and “BRITE Montreal” after the University of Toronto and the University of Montreal, who play a major role in the mission. “BRITE-Constellation will exploit and enhance recent Canadian advances in precise attitude control that have opened up for space science the domain of very low cost, miniature spacecraft, allowing a scientific return that otherwise would have had price tags 10 to 100 times higher,” Moffat said. “This will actually be the first network of satellites devoted to a fundamental problem in astrophysics.”

Is it my imagination or is there a lot more Canada/Canadian being included in news releases from the academic community these days? In fact, I made a similar comment in my June 10, 2014 posting about TRIUMF, Canada’s National Laboratory for Particle and Nuclear Physics where I noted we might not need to honk our own horns quite so loudly.

One final comment, ‘nano’satellites have been launched before as per my Aug. 6, 2012 posting,

The nanosatellites referred to in the Aug. 2, 2012 news release on EurekAlert aren’t strictly speaking nano since they are measured in inches and weigh approximately eight pounds. I guess by comparison with a standard-sized satellite, CINEMA, one of 11 CubeSats, seems nano-sized. From the news release,

Eleven tiny satellites called CubeSats will accompany a spy satellite into Earth orbit on Friday, Aug. 3, inaugurating a new type of inexpensive, modular nanosatellite designed to piggyback aboard other NASA missions. [emphasis mine]

One of the 11 will be CINEMA (CubeSat for Ions, Neutrals, Electrons, & MAgnetic fields), an 8-pound, shoebox-sized package which was built over a period of three years by 45 students from the University of California, Berkeley, Kyung Hee University in Korea, Imperial College London, Inter-American University of Puerto Rico, and University of Puerto Rico, Mayaguez.

This 2012 project had a very different focus from this Austrian-Canadian-Polish effort. From the University of Montreal news release,

The nanosatellites will be able to explore a wide range of astrophysical questions. “The constellation could detect exoplanetary transits around other stars, putting our own planetary system in context, or the pulsations of red giants, which will enable us to test and refine our models regarding the eventual fate of our Sun,” Moffat explained.

Good luck!

Biosensing cocaine

Amusingly, the Feb. 13, 2013 news item on Nanowerk highlights the biosensing aspect of the work in its title,

New biosensing nanotechnology adopts natural mechanisms to detect molecules

(Nanowerk News) Since the beginning of time, living organisms have developed ingenious mechanisms to monitor their environment.

The Feb. 13, 2013 news release from the University of Montreal (Université de Montréal) takes a somewhat different tack by focusing on cocaine,

Detecting cocaine “naturally”

Since the beginning of time, living organisms have developed ingenious mechanisms to monitor their environment. As part of an international study, a team of researchers has adapted some of these natural mechanisms to detect specific molecules such as cocaine more accurately and quickly. Their work may greatly facilitate the rapid screening—less than five minutes—of many drugs, infectious diseases, and cancers.

Professor Alexis Vallée-Bélisle of the University of Montreal Department of Chemistry has worked with Professor Francesco Ricci of the University of Rome Tor Vergata and Professor Kevin W. Plaxco of the University of California at Santa Barbara to improve a new biosensing nanotechnology. The results of the study were recently published in the Journal of the American Chemical Society (JACS).

The scientists have provided an interesting image to illustrate their work,

Artist's rendering: the research team used an existing cocaine biosensor (in green) and revised its design to react to a series of inhibitor molecules (in blue). They were able to adapt the biosensor to respond optimally even within a large concentration window. Courtesy: University of Montreal

The news release provides some insight into the current state of biosensing and what the research team was attempting to accomplish,

“Nature is a continuing source of inspiration for developing new technologies,” says Professor Francesco Ricci, senior author of the study. “Many scientists are currently working to develop biosensor technology to detect—directly in the bloodstream and in seconds—drug, disease, and cancer molecules.”

“The most recent rapid and easy-to-use biosensors developed by scientists to determine the levels of various molecules such as drugs and disease markers in the blood only do so when the molecule is present in a certain concentration, called the concentration window,” adds Professor Vallée-Bélisle. “Below or above this window, current biosensors lose much of their accuracy.”

To overcome this limitation, the international team looked at nature: “In cells, living organisms often use inhibitor or activator molecules to automatically program the sensitivity of their receptors (sensors), which are able to identify the precise amount of thousands of molecules in seconds,” explains Professor Vallée-Bélisle. “We therefore decided to adapt these inhibition, activation, and sequestration mechanisms to improve the efficiency of artificial biosensors.”

The researchers put their idea to the test by using an existing cocaine biosensor and revising its design so that it would respond to a series of inhibitor molecules. They were able to adapt the biosensor to respond optimally even with a large concentration window. “What is fascinating,” says Alessandro Porchetta, a doctoral student at the University of Rome, “is that we were successful in controlling the interactions of this system by mimicking mechanisms that occur naturally.”
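For readers who like to see the underlying idea in numbers: the way an inhibitor widens or shifts a sensor's useful range can be sketched with the textbook competitive-binding result, where the sensor's apparent dissociation constant becomes Kd × (1 + [inhibitor]/Ki). This is a generic illustration, not the team's actual model, and all concentrations and constants below are hypothetical.

```python
def fraction_bound(target, kd, inhibitor=0.0, ki=1.0):
    """Fraction of sensors bound to the target molecule.

    Uses a simple Langmuir binding curve. With a competitive
    inhibitor present, the apparent Kd becomes kd * (1 + inhibitor/ki),
    the classic competitive-binding result. All concentrations are in
    arbitrary (say, micromolar) units.
    """
    apparent_kd = kd * (1.0 + inhibitor / ki)
    return target / (target + apparent_kd)

# Hypothetical numbers: a sensor with Kd = 1 is nearly saturated at a
# target concentration of 10, so it can't distinguish higher levels...
no_inhibitor = fraction_bound(10.0, kd=1.0)

# ...but adding inhibitor (99 units here, with Ki = 1) shifts the
# apparent Kd to 100, moving the responsive window to higher
# target concentrations.
with_inhibitor = fraction_bound(10.0, kd=1.0, inhibitor=99.0, ki=1.0)

print(f"fraction bound, no inhibitor:   {no_inhibitor:.2f}")
print(f"fraction bound, with inhibitor: {with_inhibitor:.2f}")
```

By choosing a series of inhibitors and concentrations, each copy of the sensor can be tuned to a different window, which is (in spirit) how a collection of such sensors can cover a much wider dynamic range than any single one.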

“Besides the obvious applications in biosensor design, I think this work will pave the way for important applications related to the administration of cancer-targeting drugs, an area of increasing importance,” says Professor Kevin Plaxco. “The ability to accurately regulate a biosensor’s or nanomachine’s activities will greatly increase their efficiency.”

The funders for this project are (from the news release),

… the Italian Ministry of Universities and Research (MIUR), the Bill & Melinda Gates Foundation Grand Challenges Explorations program, the European Commission Marie Curie Actions program, the U.S. National Institutes of Health, and the Fonds de recherche du Québec Nature et Technologies.

Here’s a citation and a link to the research paper,

Using Distal-Site Mutations and Allosteric Inhibition To Tune, Extend, and Narrow the Useful Dynamic Range of Aptamer-Based Sensors by Alessandro Porchetta, Alexis Vallée-Bélisle, Kevin W. Plaxco, and Francesco Ricci. J. Am. Chem. Soc., 2012, 134 (51), pp. 20601–20604. DOI: 10.1021/ja310585e. Publication Date (Web): December 6, 2012.

Copyright © 2012 American Chemical Society

This article is behind a paywall.

One final note, Alexis Vallée-Bélisle has been mentioned here before in the context of a ‘Grand Challenges Canada programme’ (not the Bill and Melinda Gates ‘Grand Challenges’) announcement of several fundees in my Nov. 22, 2012 posting. That funding appears to be for a different project.