Tag Archives: University of Tokyo

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more, with all of these impacts described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunication Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But, it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, it’s an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email or you can find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018, kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, the first of its kind on beneficial AI, which this year has ballooned in size to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. The 2017 event featured Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) as a speaker.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes: AI and satellite imagery, AI and health, AI and smart cities, and trust in AI.

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018, inviting people to list their AI projects (from the ITU’s April 25, 2018 AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation in the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are in the repository but also represented are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited when discussing AI research.

Werner also pointed out, in response to my surprise over the ITU’s role in this AI initiative, that the ITU is the only UN agency which has 192* member states (countries), 150 universities, and over 700 industry members, as well as other member entities, which gives it tremendous breadth of reach. As well, the organization, founded in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in its Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action​​-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” Houlin Zhao, Secretary General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts of the world distant from Geneva, you will want to take the nine-hour time difference between Geneva (Switzerland) and here into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

**Redundant ‘and’ removed on July 19, 2018.

CRISPR-Cas9 and gold

As so often happens in the sciences, now that the initial euphoria has expended itself, problems (and solutions) with CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 are being disclosed to those of us who are not experts. From an Oct. 3, 2017 article by Bob Yirka for phys.org,

A team of researchers from the University of California and the University of Tokyo has found a way to use the CRISPR gene editing technique that does not rely on a virus for delivery. In their paper published in the journal Nature Biomedical Engineering, the group describes the new technique, how well it works and improvements that need to be made to make it a viable gene editing tool.

CRISPR-Cas9 has been in the news a lot lately because it allows researchers to directly edit genes—either disabling unwanted parts or replacing them altogether. But despite many success stories, the technique still suffers from a major deficit that prevents it from being used as a true medical tool—it sometimes makes mistakes. Those mistakes can cause small or big problems for a host depending on what goes wrong. Prior research has suggested that the majority of mistakes are due to delivery problems, which means that a replacement for the virus part of the technique is required. In this new effort, the researchers report that they have discovered just such a replacement, and it worked so well that it was able to repair a gene mutation in a Duchenne muscular dystrophy mouse model. The team has named the new technique CRISPR-Gold, because a gold nanoparticle was used to deliver the gene editing molecules instead of a virus.

An Oct. 2, 2017 article by Abby Olena for The Scientist lays out the CRISPR-Cas9 problems the scientists are trying to solve (Note: Links have been removed),

While promising, applications of CRISPR-Cas9 gene editing have so far been limited by the challenges of delivery—namely, how to get all the CRISPR parts to every cell that needs them. In a study published today (October 2) in Nature Biomedical Engineering, researchers have successfully repaired a mutation in the gene for dystrophin in a mouse model of Duchenne muscular dystrophy by injecting a vehicle they call CRISPR-Gold, which contains the Cas9 protein, guide RNA, and donor DNA, all wrapped around a tiny gold ball.

The authors have made “great progress in the gene editing area,” says Tufts University biomedical engineer Qiaobing Xu, who did not participate in the work but penned an accompanying commentary. Because their approach is nonviral, Xu explains, it will minimize the potential off-target effects that result from constant Cas9 activity, which occurs when users deliver the Cas9 template with a viral vector.

Duchenne muscular dystrophy is a degenerative disease of the muscles caused by a lack of the protein dystrophin. In about a third of patients, the gene for dystrophin has small deletions or single base mutations that render it nonfunctional, which makes this gene an excellent candidate for gene editing. Researchers have previously used viral delivery of CRISPR-Cas9 components to delete the mutated exon and achieve clinical improvements in mouse models of the disease.

“In this paper, we were actually able to correct [the gene for] dystrophin back to the wild-type sequence” via homology-directed repair (HDR), coauthor Niren Murthy, a drug delivery researcher at the University of California, Berkeley, tells The Scientist. “The other way of treating this is to do something called exon skipping, which is where you delete some of the exons and you can get dystrophin to be produced, but it’s not [as functional as] the wild-type protein.”

The research team created CRISPR-Gold by covering a central gold nanoparticle with DNA that they modified so it would stick to the particle. This gold-conjugated DNA bound the donor DNA needed for HDR, which the Cas9 protein and guide RNA bound to in turn. They coated the entire complex with a polymer that seems to trigger endocytosis and then facilitate escape of the Cas9 protein, guide RNA, and template DNA from endosomes within cells.

In order to do HDR, “you have to provide the cell [with] the Cas9 enzyme, guide RNA by which you target Cas9 to a particular part of the genome, and a big chunk of DNA, which will be used as a template to edit the mutant sequence to wild-type,” explains coauthor Irina Conboy, who studies tissue repair at the University of California, Berkeley. “They all have to be present at the same time and at the same place, so in our system you have a nanoparticle which simultaneously delivers all of those three key components in their active state.”

Olena’s article carries on to describe how the team created CRISPR-Gold and more.

Additional technical details are available in an Oct. 3, 2017 University of California at Berkeley news release by Brett Israel (also on EurekAlert), which originated the news item (Note: A link has been removed),

Scientists at the University of California, Berkeley, have engineered a new way to deliver CRISPR-Cas9 gene-editing technology inside cells and have demonstrated in mice that the technology can repair the mutation that causes Duchenne muscular dystrophy, a severe muscle-wasting disease. A new study shows that a single injection of CRISPR-Gold, as the new delivery system is called, into mice with Duchenne muscular dystrophy led to an 18-times-higher correction rate and a two-fold increase in a strength and agility test compared to control groups.

Diagram of CRISPR-Gold

CRISPR–Gold is composed of 15 nanometer gold nanoparticles that are conjugated to thiol-modified oligonucleotides (DNA-Thiol), which are hybridized with single-stranded donor DNA and subsequently complexed with Cas9 and encapsulated by a polymer that disrupts the endosome of the cell.

Since 2012, when study co-author Jennifer Doudna, a professor of molecular and cell biology and of chemistry at UC Berkeley, and colleague Emmanuelle Charpentier, of the Max Planck Institute for Infection Biology, repurposed the Cas9 protein to create a cheap, precise and easy-to-use gene editor, researchers have hoped that therapies based on CRISPR-Cas9 would one day revolutionize the treatment of genetic diseases. Yet developing treatments for genetic diseases remains a big challenge in medicine. This is because most genetic diseases can be cured only if the disease-causing gene mutation is corrected back to the normal sequence, and this is impossible to do with conventional therapeutics.

CRISPR/Cas9, however, can correct gene mutations by cutting the mutated DNA and triggering homology-directed DNA repair. However, strategies for safely delivering the necessary components (Cas9, guide RNA that directs Cas9 to a specific gene, and donor DNA) into cells need to be developed before the potential of CRISPR-Cas9-based therapeutics can be realized. A common technique to deliver CRISPR-Cas9 into cells employs viruses, but that technique has a number of complications. CRISPR-Gold does not need viruses.

In the new study, research led by the laboratories of Berkeley bioengineering professors Niren Murthy and Irina Conboy demonstrated that their novel approach, called CRISPR-Gold because gold nanoparticles are a key component, can deliver Cas9 – the protein that binds and cuts DNA – along with guide RNA and donor DNA into the cells of a living organism to fix a gene mutation.

“CRISPR-Gold is the first example of a delivery vehicle that can deliver all of the CRISPR components needed to correct gene mutations, without the use of viruses,” Murthy said.

The study was published October 2 [2017] in the journal Nature Biomedical Engineering.

CRISPR-Gold repairs DNA mutations through a process called homology-directed repair. Scientists have struggled to develop homology-directed repair-based therapeutics because they require activity at the same place and time as Cas9 protein, an RNA guide that recognizes the mutation, and donor DNA to correct the mutation.

To overcome these challenges, the Berkeley scientists invented a delivery vessel that binds all of these components together, and then releases them when the vessel is inside a wide variety of cell types, triggering homology directed repair. CRISPR-Gold’s gold nanoparticles coat the donor DNA and also bind Cas9. When injected into mice, their cells recognize a marker in CRISPR-Gold and then import the delivery vessel. Then, through a series of cellular mechanisms, CRISPR-Gold is released into the cells’ cytoplasm and breaks apart, rapidly releasing Cas9 and donor DNA.

Schematic of CRISPR-Gold's method of action

CRISPR-Gold’s method of action (Click to enlarge).

A single injection of CRISPR-Gold into muscle tissue of mice that model Duchenne muscular dystrophy restored 5.4 percent of the dystrophin gene, which causes the disease, to the wild-type, or normal, sequence. This correction rate was approximately 18 times higher than in mice treated with Cas9 and donor DNA by themselves, which experienced only a 0.3 percent correction rate.

Importantly, the study authors note, CRISPR-Gold faithfully restored the normal sequence of dystrophin, which is a significant improvement over previously published approaches that only removed the faulty part of the gene, making it shorter and converting one disease into another, milder disease.

CRISPR-Gold was also able to reduce tissue fibrosis – the hallmark of diseases where muscles do not function properly – and enhanced strength and agility in mice with Duchenne muscular dystrophy. CRISPR-Gold-treated mice showed a two-fold increase in hanging time in a common test for mouse strength and agility, compared to mice injected with a control.

“These experiments suggest that it will be possible to develop non-viral CRISPR therapeutics that can safely correct gene mutations, via the process of homology-directed repair, by simply developing nanoparticles that can simultaneously encapsulate all of the CRISPR components,” Murthy said.

CRISPR-Cas9

CRISPR in action: A model of the Cas9 protein cutting a double-stranded piece of DNA

The study found that CRISPR-Gold’s approach to Cas9 protein delivery is safer than viral delivery of CRISPR, which, in addition to toxicity, amplifies the side effects of Cas9 through continuous expression of this DNA-cutting enzyme. When the research team tested CRISPR-Gold’s gene-editing capability in mice, they found that CRISPR-Gold efficiently corrected the DNA mutation that causes Duchenne muscular dystrophy, with minimal collateral DNA damage.

The researchers quantified CRISPR-Gold’s off-target DNA damage and found damage levels similar to that of a typical DNA sequencing error in a typical cell that was not exposed to CRISPR (0.005 – 0.2 percent). To test for possible immunogenicity, the blood stream cytokine profiles of mice were analyzed at 24 hours and two weeks after the CRISPR-Gold injection. CRISPR-Gold did not cause an acute up-regulation of inflammatory cytokines in plasma, after multiple injections, or weight loss, suggesting that CRISPR-Gold can be used multiple times safely, and that it has a high therapeutic window for gene editing in muscle tissue.

“CRISPR-Gold and, more broadly, CRISPR-nanoparticles open a new way for safer, accurately controlled delivery of gene-editing tools,” Conboy said. “Ultimately, these techniques could be developed into a new medicine for Duchenne muscular dystrophy and a number of other genetic diseases.”

A clinical trial will be needed to discern whether CRISPR-Gold is an effective treatment for genetic diseases in humans. Study co-authors Kunwoo Lee and Hyo Min Park have formed a start-up company, GenEdit (Murthy has an ownership stake in GenEdit), which is focused on translating the CRISPR-Gold technology into humans. The labs of Murthy and Conboy are also working on the next generation of particles that can deliver CRISPR into tissues from the blood stream and would preferentially target adult stem cells, which are considered the best targets for gene correction because stem and progenitor cells are capable of gene editing, self-renewal and differentiation.

“Genetic diseases cause devastating levels of mortality and morbidity, and new strategies for treating them are greatly needed,” Murthy said. “CRISPR-Gold was able to correct disease-causing gene mutations in vivo, via the non-viral delivery of Cas9 protein, guide RNA and donor DNA, and therefore has the potential to develop into a therapeutic for treating genetic diseases.”

The study was funded by the National Institutes of Health, the W.M. Keck Foundation, the Moore Foundation, the Li Ka Shing Foundation, Calico, Packer, Roger’s and SENS, and the Center of Innovation (COI) Program of the Japan Science and Technology Agency.

Here’s a link to and a citation for the paper,

Nanoparticle delivery of Cas9 ribonucleoprotein and donor DNA in vivo induces homology-directed DNA repair by Kunwoo Lee, Michael Conboy, Hyo Min Park, Fuguo Jiang, Hyun Jin Kim, Mark A. Dewitt, Vanessa A. Mackley, Kevin Chang, Anirudh Rao, Colin Skinner, Tamanna Shobha, Melod Mehdipour, Hui Liu, Wen-chin Huang, Freeman Lan, Nicolas L. Bray, Song Li, Jacob E. Corn, Kazunori Kataoka, Jennifer A. Doudna, Irina Conboy, & Niren Murthy. Nature Biomedical Engineering (2017) doi:10.1038/s41551-017-0137-2 Published online: 02 October 2017

This paper is behind a paywall.

Hokusai’s blue and cellulose nanofibers to clean up contaminated soil and water in Fukushima

The Great Wave off Kanagawa (“Under a wave off Kanagawa”), also known as The Great Wave or simply The Wave, by Katsushika Hokusai – Metropolitan Museum of Art, online database: entry 45434, Public Domain, https://commons.wikimedia.org/w/index.php?curid=2798407

I thought it might be a good idea to embed a copy of Hokusai’s Great Wave and the blue these scientists in Japan have used as their inspiration. (By the way, it seems these scientists collaborated with Mildred Dresselhaus, who died at the age of 86 a few months after their paper was published. In her honour, and before getting to the latest news, here’s my Feb. 23, 2017 posting about the ‘Queen of Carbon’.)

Now onto more current news, from an Oct. 13, 2017 news item on Nanowerk (Note: A link has been removed),

By combining the same Prussian blue pigment used in the works of popular Edo-period artist Hokusai and cellulose nanofiber, a raw material of paper, a University of Tokyo research team succeeded in synthesizing compound nanoparticles, comprising organic and inorganic substances (Scientific Reports, “Cellulose nanofiber backboned Prussian blue nanoparticles as powerful adsorbents for the selective elimination of radioactive cesium”). This new class of organic/inorganic composite nanoparticles is able to selectively adsorb, or collect on the surface, radioactive cesium.

The team subsequently developed sponges from these nanoparticles that proved highly effective in decontaminating the water and soil in Fukushima Prefecture exposed to radioactivity following the nuclear accident there in March 2011.

I think these are the actual sponges, not an artist’s impression,

Decontamination sponge spawned from current study
Cellulose nanofiber-Prussian blue compounds are permanently anchored in spongiform chambers (cells) in this decontamination sponge. It can thus be used as a powerful adsorbent for selectively eliminating radioactive cesium. © 2017 Sakata & Mori Laboratory.

An Oct. 13, 2017 University of Tokyo press release, which originated the news item, provides more detail about the sponges and the difficulties of remediating radioactive air and soil,

Removing radioactive materials such as cesium-134 and -137 from contaminated seawater or soil is not an easy job. First of all, a huge amount of similar substances with competing functions has to be removed from the area, an extremely difficult task. Prussian blue (ferric hexacyanoferrate) has a jungle gym-like colloidal structure, and the size of its single cubic orifice, or opening, is a near-perfect match to the size of cesium ions; therefore, it is prescribed as medication for patients exposed to radiation for selectively adsorbing cesium. However, as Prussian blue is highly attracted to water, recovering it becomes highly difficult once it is dissolved into the environment; for this reason, its use in the field for decontamination has been limited.

Taking a hint from the Prussian blue in Hokusai’s woodblock prints not losing their color even when getting wet from rain, the team led by Professor Ichiro Sakata and Project Professor Bunshi Fugetsu at the University of Tokyo’s Nanotechnology Innovation Research Unit at the Policy Alternatives Research Institute, and Project Researcher Adavan Kiliyankil Vipin at the Graduate School of Engineering developed an insoluble nanoparticle obtained from combining cellulose and Prussian blue—Hokusai had in fact formed a chemical bond in his handling of Prussian blue and paper (cellulose).

The scientists created this cellulose-Prussian blue combined nanoparticle by first preparing cellulose nanofibers using a process called TEMPO oxidization and securing ferric ions (III) onto them, then introduced a certain amount of hexacyanoferrate, which adhered to Prussian blue nanoparticles with a diameter ranging from 5–10 nanometers. The nanoparticles obtained in this way were highly resistant to water, and moreover, were capable of adsorbing 139 mg of radioactive cesium ion per gram.

Field studies on soil decontamination in Fukushima have been underway since last year. A highly effective approach has been to sow and allow plant seeds to germinate inside the sponge made from the nanoparticles, then getting the plants’ roots to take up cesium ions from the soil to the sponge. Water can significantly shorten decontamination times compared to soil, which usually requires extracting cesium from it with a solvent.

It has been more than six years since the radioactive fallout from a series of accidents at the Fukushima Daiichi nuclear power plant following the giant earthquake and tsunami in northeastern Japan. Decontamination with the cellulose nanofiber-Prussian blue compound can lead to new solutions for contamination in disaster-stricken areas.

“I was pondering about how Prussian blue immediately gets dissolved in water when I happened upon a Hokusai woodblock print, and how the indigo color remained firmly set in the paper, without bleeding, even after all these years,” reflects Fugetsu. He continues, “That revelation provided a clue for a solution.”

“The amount of research on cesium decontamination increased after the Chernobyl nuclear power plant accident, but a lot of the studies were limited to being academic and insufficient for practical application in Fukushima,” says Vipin. He adds, “Our research offers practical applications and has high potential for decontamination on an industrial scale not only in Fukushima but also in other cesium-contaminated areas.”

Here’s a link to and a citation for the paper,

Cellulose nanofiber backboned Prussian blue nanoparticles as powerful adsorbents for the selective elimination of radioactive cesium by Adavan Kiliyankil Vipin, Bunshi Fugetsu, Ichiro Sakata, Akira Isogai, Morinobu Endo, Mingda Li, & Mildred S. Dresselhaus. Scientific Reports 6, Article number: 37009 (2016) doi:10.1038/srep37009 Published online: 15 November 2016

This is open access.

Nanomesh for hypoallergenic wearable electronics

It stands to reason that sensors and monitoring devices held against the skin (wearable electronics) for long periods of time could provoke an allergic reaction. Scientists at the University of Tokyo have devised a possible solution according to a July 17, 2017 news item on ScienceDaily,

A hypoallergenic electronic sensor can be worn on the skin continuously for a week without discomfort, and is so light and thin that users forget they even have it on, says a Japanese group of scientists. The elastic electrode constructed of breathable nanoscale meshes holds promise for the development of noninvasive e-skin devices that can monitor a person’s health continuously over a long period.

Here’s an image illustrating the hypoallergenic electronics,

Caption: The electric current from a flexible battery placed near the knuckle flows through the conductor and powers the LED just below the fingernail. Credit: 2017 Someya Laboratory.

A University of Tokyo press release on EurekAlert, which originated the news item, expands on the theme,

Wearable electronics that monitor heart rate and other vital health signals have made headway in recent years, with next-generation gadgets employing lightweight, highly elastic materials attached directly onto the skin for more sensitive, precise measurements. However, although the ultrathin films and rubber sheets used in these devices adhere and conform well to the skin, their lack of breathability is deemed unsafe for long-term use: dermatological tests show the fine, stretchable materials prevent sweating and block airflow around the skin, causing irritation and inflammation, which ultimately could lead to lasting physiological and psychological effects.

“We learned that devices that can be worn for a week or longer for continuous monitoring were needed for practical use in medical and sports applications,” says Professor Takao Someya at the University of Tokyo’s Graduate School of Engineering whose research group had previously developed an on-skin patch that measured oxygen in blood.

In the current research, the group developed an electrode constructed from nanoscale meshes containing a water-soluble polymer, polyvinyl alcohol (PVA), and a gold layer–materials considered safe and biologically compatible with the body. The device can be applied by spraying a tiny amount of water, which dissolves the PVA nanofibers and allows it to stick easily to the skin–it conformed seamlessly to curvilinear surfaces of human skin, such as sweat pores and the ridges of an index finger’s fingerprint pattern.

The researchers next conducted a skin patch test on 20 subjects and detected no inflammation on the participants’ skin after they had worn the device for a week. The group also evaluated the permeability, with water vapor, of the nanomesh conductor–along with those of other substrates like ultrathin plastic foil and a thin rubber sheet–and found that its porous mesh structure exhibited superior gas permeability compared to that of the other materials.

Furthermore, the scientists proved the device’s mechanical durability through repeated bending and stretching, exceeding 10,000 times, of a conductor attached on the forefinger; they also established its reliability as an electrode for electromyogram recordings when its readings of the electrical activity of muscles were comparable to those obtained through conventional gel electrodes.

“It will become possible to monitor patients’ vital signs without causing any stress or discomfort,” says Someya about the future implications of the team’s research. In addition to nursing care and medical applications, the new device promises to enable continuous, precise monitoring of athletes’ physiological signals and bodily motion without impeding their training or performance.

Here’s a link to and a citation for the paper,

Inflammation-free, gas-permeable, lightweight, stretchable on-skin electronics with nanomeshes by Akihito Miyamoto, Sungwon Lee, Nawalage Florence Cooray, Sunghoon Lee, Mami Mori, Naoji Matsuhisa, Hanbit Jin, Leona Yoda, Tomoyuki Yokota, Akira Itoh, Masaki Sekino, Hiroshi Kawasaki, Tamotsu Ebihara, Masayuki Amagai, & Takao Someya. Nature Nanotechnology (2017) doi:10.1038/nnano.2017.125 Published online 17 July 2017

This paper is behind a paywall.

Making sense of the world with data visualization

A March 30, 2017 item on phys.org features an essay about data visualization,

The late data visionary Hans Rosling mesmerised the world with his work, contributing to a more informed society. Rosling used global health data to paint a stunning picture of how our world is a better place now than it was in the past, bringing hope through data.

Matt Escobar, postdoctoral researcher on machine learning applied to chemical engineering at the University of Tokyo, wrote this March 30, 2017 essay originally for The Conversation,

Now more than ever, data are collected from every aspect of our lives. From social media and advertising to artificial intelligence and automated systems, understanding and parsing information have become highly valuable skills. But we often overlook the importance of knowing how to communicate data to peers and to the public in an effective, meaningful way.

Hans Rosling paved the way for effectively communicating global health data. Vimeo

Data visualisation can take many other forms, just as data itself can be interpreted in many different ways. It can be used to highlight important achievements, as Bill and Melinda Gates have shown with their annual letters in which their main results and aspirations are creatively displayed.

Escobar goes on to explore a number of approaches to data visualization including this one,

Finding similarity between samples is another good starting point. Network analysis is a well-known technique that relies on establishing connections between samples (also called nodes). Strong connections between samples indicate a high level of similarity between features.

Once these connections are established, the network rearranges itself so that samples with like characteristics stick together. While before we were considering only the most relevant features of each live show and using that as reference, now all features are assessed simultaneously – similarity is more broadly defined.

Networks show a highly connected yet well-defined world.

The amount of information that can be visualised with networks is akin to dimensionality reduction, but the feature assessment aspect is now different. Whereas previously samples would be grouped based on a few specific marking features, in this tool samples that share many features stick together. That leaves it up to users to choose their approach based on their goals.

He finishes by noting that his essay is an introduction to a complex topic.
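
For readers who like to tinker, here’s a minimal sketch (my own, not Escobar’s code) of the network-analysis idea he describes: treat each sample as a node, connect samples whose feature vectors are similar enough, and let a force-directed layout pull similar samples together. The toy data, the cosine-similarity measure, and the threshold are all assumptions made purely for illustration,

```python
# A minimal sketch of similarity-network analysis (illustrative only).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
features = rng.random((20, 5))      # 20 samples, 5 features each (toy data)

# Cosine similarity between every pair of samples
unit = features / np.linalg.norm(features, axis=1, keepdims=True)
similarity = unit @ unit.T

# Build a network: an edge means "these two samples are strongly similar"
G = nx.Graph()
G.add_nodes_from(range(len(features)))
threshold = 0.9                     # keep only strong connections
for i in range(len(features)):
    for j in range(i + 1, len(features)):
        if similarity[i, j] >= threshold:
            G.add_edge(i, j, weight=float(similarity[i, j]))

# A spring (force-directed) layout rearranges the network so that strongly
# connected samples end up close together, as described in the essay.
positions = nx.spring_layout(G, weight="weight", seed=1)
for node, (x, y) in positions.items():
    print(f"sample {node:2d}: ({x:+.2f}, {y:+.2f}), {G.degree[node]} strong links")
```

Swap in a real feature matrix and the same few lines give you the kind of “samples with like characteristics stick together” picture Escobar describes.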

“Brute force” technique for biomolecular information processing

The research is being announced by the University of Tokyo but there is definitely a French flavour to this project. From a June 20, 2016 news item on ScienceDaily,

A Franco-Japanese research group at the University of Tokyo has developed a new “brute force” technique to test thousands of biochemical reactions at once and quickly home in on the range of conditions where they work best. Until now, optimizing such biomolecular systems, which can be applied for example to diagnostics, would have required months or years of trial and error experiments, but with this new technique that could be shortened to days.

A June 20, 2016 University of Tokyo news release on EurekAlert, which originated the news item, describes the project in more detail,

“We are interested in programming complex biochemical systems so that they can process information in a way that is analogous to electronic devices. If you could obtain a high-resolution map of all possible combinations of reaction conditions and their corresponding outcomes, the development of such reactions for specific purposes like diagnostic tests would be quicker than it is today,” explains Centre National de la Recherche Scientifique (CNRS) researcher Yannick Rondelez at the Institute of Industrial Science (IIS) [located at the University of Tokyo].

“Currently researchers use a combination of computer simulations and painstaking experiments. However, while simulations can test millions of conditions, they are based on assumptions about how molecules behave and may not reflect the full detail of reality. On the other hand, testing all possible conditions, even for a relatively simple design, is a daunting job.”

Rondelez and his colleagues at the Laboratory for Integrated Micro-Mechanical Systems (LIMMS), a 20-year collaboration between the IIS and the French CNRS, demonstrated a system that can test ten thousand different biochemical reaction conditions at once. Working with the IIS Applied Microfluidic Laboratory of Professor Teruo Fujii, they developed a platform to generate a myriad of micrometer-sized droplets containing random concentrations of reagents and then sandwich a single layer of them between glass slides. Fluorescent markers combined with the reagents are automatically read by a microscope to determine the precise concentrations in each droplet and also observe how the reaction proceeds.

“It was difficult to fine-tune the device at first,” explains Dr Anthony Genot, a CNRS researcher at LIMMS. “We needed to generate thousands of droplets containing reagents within a precise range of concentrations to produce high resolution maps of the reactions we were studying. We expected that this would be challenging. But one unanticipated difficulty was immobilizing the droplets for the several days it took for some reactions to unfold. It took a lot of testing to create a glass chamber design that was airtight and firmly held the droplets in place.” Overall, it took nearly two years to fine-tune the device until the researchers could get their droplet experiment to run smoothly.

Seeing the new system producing results was revelatory. “You start with a screen full of randomly-colored dots, and then suddenly the computer rearranges them into a beautiful high-resolution map, revealing hidden information about the reaction dynamics. Seeing them all slide into place to produce something that had only ever been seen before through simulation was almost magical,” enthuses Rondelez.

“The map can tell us not only about the best conditions of biochemical reactions, it can also tell us about how the molecules behave in certain conditions. Using this map we’ve already found a molecular behavior that had been predicted theoretically, but had not been shown experimentally. With our technique we can explore how molecules talk to each other in test tube conditions. Ultimately, we hope to illuminate the intimate machinery of living molecular systems like ourselves,” says Rondelez.
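
To give a feel for what ‘brute force’ mapping looks like in silico, here’s a toy sketch (mine, not the LIMMS team’s code or chemistry): scatter thousands of virtual droplets with random reagent concentrations, record a yes/no outcome for each, and bin the results into a map of outcome versus conditions. The ‘reaction fires’ rule below is an invented placeholder, not a model of their biochemical circuits,

```python
# Toy "brute force" condition mapping: random droplets -> outcome map.
import numpy as np

rng = np.random.default_rng(42)
n_droplets = 10_000

# Random concentrations of two reagents, in arbitrary units (0 to 1)
conc_a = rng.random(n_droplets)
conc_b = rng.random(n_droplets)

# Placeholder rule: the reaction "fires" with enough of reagent A but is
# suppressed when reagent B is too high (purely illustrative).
fired = (conc_a > 0.3) & (conc_b < 0.8 * conc_a)

# Bin droplets onto a 50 x 50 grid and compute the fraction that fired in
# each bin -- this grid is the "map" of reaction conditions.
counts, xedges, yedges = np.histogram2d(conc_a, conc_b, bins=50)
fired_counts, _, _ = np.histogram2d(conc_a[fired], conc_b[fired],
                                    bins=[xedges, yedges])
with np.errstate(divide="ignore", invalid="ignore"):
    outcome_map = np.where(counts > 0, fired_counts / counts, np.nan)

print("map shape:", outcome_map.shape)
print("overall fraction of droplets that fired:", round(fired.mean(), 3))
```

In the real experiment the outcomes come from fluorescence markers read by a microscope rather than a formula, but the mapping logic is the same.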

Here’s a link to and a citation for the paper,

High-resolution mapping of bifurcations in nonlinear biochemical circuits by A. J. Genot, A. Baccouche, R. Sieskind, N. Aubert-Kato, N. Bredeche, J. F. Bartolo, V. Taly, T. Fujii, & Y. Rondelez. Nature Chemistry (2016) doi:10.1038/nchem.2544 Published online 20 June 2016

This paper is behind a paywall.

pH dependent nanoparticle-based contrast agent for MRIs (magnetic resonance images)

This news about a safer and more effective contrast agent for MRIs (magnetic resonance images) developed by Japanese scientists comes from a June 6, 2016 article by Heather Zeiger on phys.org. First, some explanations,

Magnetic resonance imaging relies on the excitation and subsequent relaxation of protons. In clinical MRI studies, the signal is determined by the relaxation time of the hydrogen protons in water. To get a stronger signal, scientists can use contrast agents to shorten the relaxation time of the protons.

MRI is non-invasive and does not involve radiation, making it a safe diagnostic tool. However, its weak signal makes tumor detection difficult. The ideal contrast agent would select for malignant tumors, making its location and diagnosis much more obvious.

Nanoparticle contrast agents have been of interest because nanoparticles can be functionalized and, as in this study, can contain various metals. Researchers have attempted to functionalize nanoparticles with ligands that attach to chemical factors on the surface of cancer cells. However, cancer cells tend to be compositionally heterogeneous, leading some researchers to look for nanoparticles that respond to differences in pH or redox potential compared to normal cells.
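
As a quick aside, here’s a back-of-the-envelope sketch of how a contrast agent shortens T1, using the standard relaxivity relation (the relaxivity and concentration values are illustrative assumptions, not numbers from the paper),

```python
# Standard relaxivity relation: R1_observed = R1_intrinsic + r1 * [CA],
# where R1 = 1/T1. A shorter T1 (larger R1) gives a stronger T1-weighted
# signal. All numbers below are illustrative, not from the study.

def t1_with_agent(t1_intrinsic_s: float, r1_per_mM_s: float, conc_mM: float) -> float:
    """Return the shortened T1 (in seconds) once a contrast agent is present."""
    r1_observed = 1.0 / t1_intrinsic_s + r1_per_mM_s * conc_mM
    return 1.0 / r1_observed

t1_tissue = 1.0   # s, a typical soft-tissue T1 (illustrative)
r1 = 5.0          # mM^-1 s^-1, illustrative relaxivity
for conc in (0.0, 0.05, 0.1, 0.5):   # mM of released, protein-bound ions
    print(f"[CA] = {conc:4.2f} mM -> T1 = {t1_with_agent(t1_tissue, r1, conc):.3f} s")
```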

Now for the research,

Researchers from the University of Tokyo, Tokyo Institute of Technology, Kawasaki Institute of Industry Promotion, and the Japan Agency for Quantum and Radiological Science and Technology have developed a contrast agent from calcium phosphate-based nanoparticles that release manganese ions in an acidic environment. …

Peng Mi, Daisuke Kokuryo, Horacio Cabral, Hailiang Wu, Yasuko Terada, Tsuneo Saga, Ichio Aoki, Nobuhiro Nishiyama, and Kazunori Kataoka developed a contrast agent that is comprised of Mn2+-doped CaP nanoparticles with a PEG shell. They reasoned that using CaP nanoparticles, which are known to be pH sensitive, would allow the targeted release of Mn2+ ions in the tumor microenvironment. The tumor microenvironment tends to have a lower pH than the normal regions due to rapid cell metabolism in an oxygen-depleted environment. Manganese ions were tested because they are paramagnetic, which makes for a good contrast agent. They also bind to proteins, creating a slowly rotating manganese-protein system that results in sharp contrast enhancement.

These results were promising, so Peng Mi, et al. then tested whether the CaPMnPEG contrast agent worked in solid tumors. Because Mn2+ remains confined within the nanoparticle matrix at physiological pH, CaPMnPEG demonstrates a much lower toxicity [emphasis mine] compared to MnCl2. MRI studies showed a tumor-to-normal contrast of 131% after 30 minutes, which is much higher than Gd-DTPA [emphasis mine], a clinically approved contrast agent. After an hour, the tumor-to-normal ratio was 160% and remained around 170% for several hours.

Three-dimensional MRI studies of solid tumors showed that without the addition of CaPMnPEG, only blood vessels were visible. However, upon adding CaPMnPEG, the tumor was easily distinguishable. Additionally, there is evidence that excess Mn2+ leaves the plasma after an hour. The contrast signal remained strong for several hours indicating that protein binding rather than Mn2+ concentration is important for signal enhancement.

Finally, tests with metastatic tumors in the liver (C26 colon cancer cells) showed that CaPMnPEG works well in solid organ analysis and is highly sensitive to detecting millimeter-sized micrometastasis [emphasis mine]. Unlike other contrast agents used in the clinic, CaPMnPEG provided a contrast signal that lasted for several hours after injection. After an hour, the signal was enhanced by 25% and after two hours, the signal was enhanced by 39%.

This is exciting stuff. Bravo to the researchers!

Here’s a link to and citation for the paper,

A pH-activatable nanoparticle with signal-amplification capabilities for non-invasive imaging of tumour malignancy by Peng Mi, Daisuke Kokuryo, Horacio Cabral, Hailiang Wu, Yasuko Terada, Tsuneo Saga, Ichio Aoki, Nobuhiro Nishiyama, & Kazunori Kataoka. Nature Nanotechnology (2016) doi:10.1038/nnano.2016.72 Published online 16 May 2016

This paper is behind a paywall.

Fingertip pressure sensors from Japan


Pressure sensor
The pressure sensor wraps around and conforms to the shape of the fingers while still accurately measuring pressure distribution.
© 2016 Someya Laboratory.

Those fingertip sensors could be jewellery but they’re not. From a March 8, 2016 news item on Nanowerk (Note: A link has been removed),

Researchers at the University of Tokyo working with American colleagues have developed a transparent, bendable and sensitive pressure sensor (“A Transparent, Bending Insensitive Pressure Sensor”). Healthcare practitioners may one day be able to physically screen for breast cancer using pressure-sensitive rubber gloves to detect tumors, owing to this newly developed sensor.

A March 7, 2016 University of Tokyo press release, which originated the news item, expands on the theme,

Conventional pressure sensors are flexible enough to fit to soft surfaces such as human skin, but they cannot measure pressure changes accurately once they are twisted or wrinkled, making them unsuitable for use on complex and moving surfaces. Additionally, it is difficult to reduce them below 100 micrometers thickness because of limitations in current production methods.

To address these issues, an international team of researchers led by Dr. Sungwon Lee and Professor Takao Someya of the University of Tokyo’s Graduate School of Engineering has developed a nanofiber-type pressure sensor that can measure pressure distribution of rounded surfaces such as an inflated balloon and maintain its sensing accuracy even when bent over a radius of 80 micrometers, equivalent to just twice the width of a human hair. The sensor is roughly 8 micrometers thick and can measure the pressure in 144 locations at once.

The device demonstrated in this study consists of organic transistors, electronic switches made from carbon and oxygen based organic materials, and a pressure sensitive nanofiber structure. Carbon nanotubes and graphene were added to an elastic polymer to create nanofibers with a diameter of 300 to 700 nanometers, which were then entangled with each other to form a transparent, thin and light porous structure.

“We’ve also tested the performance of our pressure sensor with an artificial blood vessel and found that it could detect small pressure changes and speed of pressure propagation,” says Lee. He continues, “Flexible electronics have great potential for implantable and wearable devices. I realized that many groups are developing flexible sensors that can measure pressure but none of them are suitable for measuring real objects since they are sensitive to distortion. That was my main motivation and I think we have proposed an effective solution to this problem.”

Here’s a link to and a citation for the paper,

A transparent bending-insensitive pressure sensor by Sungwon Lee, Amir Reuveny, Jonathan Reeder, Sunghoon Lee, Hanbit Jin, Qihan Liu, Tomoyuki Yokota, Tsuyoshi Sekitani, Takashi Isoyama, Yusuke Abe, Zhigang Suo & Takao Someya. Nature Nanotechnology (2016)  doi:10.1038/nnano.2015.324 Published online 25 January 2016

This paper is behind a paywall.

Origami and our pop-up future

They should have declared Jan. 25, 2016 ‘L. Mahadevan Day’ at Harvard University. The researcher was listed as an author on two major papers. I covered the first piece of research, 4D printed hydrogels, in this Jan. 26, 2016 posting. Now for Mahadevan’s other work, from a Jan. 27, 2016 news item on Nanotechnology Now,

What if you could make any object out of a flat sheet of paper?

That future is on the horizon thanks to new research by L. Mahadevan, the Lola England de Valpine Professor of Applied Mathematics, Organismic and Evolutionary Biology, and Physics at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS). He is also a core faculty member of the Wyss Institute for Biologically Inspired Engineering, and member of the Kavli Institute for Bionano Science and Technology, at Harvard University.

Mahadevan and his team have characterized a fundamental origami fold, or tessellation, that could be used as a building block to create almost any three-dimensional shape, from nanostructures to buildings. …

A Jan. 26, 2016 Harvard University news release by Leah Burrows, which originated the news item, provides more detail about the specific fold the team has been investigating,

The folding pattern, known as the Miura-ori, is a periodic way to tile the plane using the simplest mountain-valley fold in origami. It was used as a decorative item in clothing at least as long ago as the 15th century. A folded Miura can be packed into a flat, compact shape and unfolded in one continuous motion, making it ideal for packing rigid structures like solar panels.  It also occurs in nature in a variety of situations, such as in insect wings and certain leaves.

“Could this simple folding pattern serve as a template for more complicated shapes, such as saddles, spheres, cylinders, and helices?” asked Mahadevan.

“We found an incredible amount of flexibility hidden inside the geometry of the Miura-ori,” said Levi Dudte, graduate student in the Mahadevan lab and first author of the paper. “As it turns out, this fold is capable of creating many more shapes than we imagined.”

Think surgical stents that can be packed flat and pop-up into three-dimensional structures once inside the body or dining room tables that can lean flat against the wall until they are ready to be used.

“The collapsibility, transportability and deployability of Miura-ori folded objects makes it a potentially attractive design for everything from space-bound payloads to small-space living to laparoscopic surgery and soft robotics,” said Dudte.

Here’s a .gif demonstrating the fold,

This spiral folds rigidly from flat pattern through the target surface and onto the flat-folded plane (Image courtesy of Mahadevan Lab) Harvard University

The news release offers some details about the research,

To explore the potential of the tessellation, the team developed an algorithm that can create certain shapes using the Miura-ori fold, repeated with small variations. Given the specifications of the target shape, the program lays out the folds needed to create the design, which can then be laser printed for folding.

The program takes into account several factors, including the stiffness of the folded material and the trade-off between the accuracy of the pattern and the effort associated with creating finer folds – an important characterization because, as of now, these shapes are all folded by hand.

“Essentially, we would like to be able to tailor any shape by using an appropriate folding pattern,” said Mahadevan. “Starting with the basic mountain-valley fold, our algorithm determines how to vary it by gently tweaking it from one location to the other to make a vase, a hat, a saddle, or to stitch them together to make more and more complex structures.”

“This is a step in the direction of being able to solve the inverse problem – given a functional shape, how can we design the folds on a sheet to achieve it,” Dudte said.

“The really exciting thing about this fold is it is completely scalable,” said Mahadevan. “You can do this with graphene, which is one atom thick, or you can do it on the architectural scale.”

Co-authors on the study include Etienne Vouga, currently at the University of Texas at Austin, and Tomohiro Tachi from the University of Tokyo. …
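
For anyone who wants to play with the geometry, here’s a minimal sketch (mine, not the Harvard team’s algorithm) that lays out the flat Miura-ori crease pattern the work starts from: rows of vertices are offset alternately, so the nominally vertical creases zigzag while the horizontal creases stay straight. The dimensions and angle are arbitrary, and the mountain/valley assignment is left out,

```python
# Flat Miura-ori crease pattern: alternately sheared rows of parallelograms.
import math

def miura_crease_pattern(n_rows=4, n_cols=6, width=1.0, height=1.0, gamma_deg=60.0):
    """Return (vertices, creases) for a flat Miura-ori crease pattern.

    vertices: dict mapping (row, col) -> (x, y)
    creases:  list of ((row, col), (row, col), family), where family is
              "straight" (horizontal creases) or "zigzag" (oblique creases).
    In a foldable pattern the straight creases alternate mountain/valley
    row by row; that assignment is omitted here.
    """
    shear = height / math.tan(math.radians(gamma_deg))  # alternating row offset
    vertices = {}
    for i in range(n_rows + 1):
        for j in range(n_cols + 1):
            vertices[(i, j)] = (j * width + (i % 2) * shear, i * height)

    creases = []
    for i in range(n_rows + 1):
        for j in range(n_cols + 1):
            if j < n_cols:                      # horizontal (straight) crease
                creases.append(((i, j), (i, j + 1), "straight"))
            if i < n_rows:                      # oblique (zigzag) crease
                creases.append(((i, j), (i + 1, j), "zigzag"))
    return vertices, creases

verts, creases = miura_crease_pattern()
print(f"{len(verts)} vertices, {len(creases)} creases")
for i in range(5):                              # one zigzag column of vertices
    print((i, 0), "->", tuple(round(c, 2) for c in verts[(i, 0)]))
```

The Harvard approach, as the news release describes it, goes further by perturbing this uniform pattern from cell to cell so the folded sheet approximates a target curved surface.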

Here’s a link to and a citation for the paper,

Programming curvature using origami tessellations by Levi H. Dudte, Etienne Vouga, Tomohiro Tachi, & L. Mahadevan. Nature Materials (2016) doi:10.1038/nmat4540 Published online 25 January 2016

This paper is behind a paywall.

Happy Thanksgiving! Oct. 12, 2015, my last mention of science debates in the Canadian 2015 federal election, and my 4001st posting

Two things for me to celebrate today: Thanksgiving (in Canada, we celebrate on the 2nd Monday of October) and my 4001st posting (this one).

Science for the people

Plus, there’s much to celebrate about science discussion during the 2015 Canadian federal election. I stumbled across Science for the People, which is a weekly radio show based in Canada (from the About page),

Science for the People is a syndicated radio show and podcast that broadcasts weekly across North America. We are a long-format interview show that explores the connections between science, popular culture, history, and public policy, to help listeners understand the evidence and arguments behind what’s in the news and on the shelves.

Every week, our hosts sit down with science researchers, writers, authors, journalists, and experts to discuss science from the past, the science that affects our lives today, and how science might change our future.

Contact

If you have comments, show ideas, or questions about Science for the People, email feedback@scienceforthepeople.ca.

Theme Song

Our theme song music comes from the song “Binary Consequence” by the band Fractal Pattern. You can find the full version of it on their album No Hope But Mt. Hope.

License & Copyright

All Science for the People episodes are under the Creative Commons license. You are free to distribute unedited versions of the episodes for non-commercial purposes. If you would like to edit the episode please contact us.

Episode #338 (2015 Canadian federal election and science) was originally broadcast on Oct. 9,  2015 and features,

This week, we’re talking about politics, and the prospects for pro-science politicians, parties and voters in Canada. We’ll spend the hour with panelists Katie Gibbs, Executive Director of Evidence for Democracy, science librarian John Dupuis, journalist Mike De Souza, and former Canadian government scientist Steven Campana, for an in-depth discussion about the treatment of science by the current Canadian government, and what’s at stake for science in the upcoming federal election.

The podcast is approximately one hour long and Désirée Schell (sp?) hosts/moderates an interesting discussion where one of the participants notes that issues about science and the muzzling of scientists predate Harper. The speaker dates the issues back to the Chrétien/Martin years. Note: Jean Chrétien was Prime Minister from 1993 to 2003 and Paul Martin, his successor, was Prime Minister from 2003 to 2006 when he was succeeded by current Prime Minister, Stephen Harper. (I attended a Philosophers’ Cafe event on Oct. 1, 2015 where the moderator dated the issues back to the Mulroney years. Note: Brian Mulroney was Prime Minister from 1984 – 1993.) So, it’s been 10, 20, or 30 years depending on your viewpoint and when you started noticing (assuming you’re of an age to have noticed something happening 30 years ago).

The participants also spent some time discussing why Canadians would care about science. Interestingly, one of the speakers claimed the current Syrian refugee crisis has its roots in climate change, a science issue, and he noted the US Dept. of Defense views climate change as a threat multiplier. For anyone who doesn’t know, the US Dept. of Defense funds a lot of science research.

It's a far-ranging discussion, which doesn't really touch on science as an election issue until some 40 minutes into the podcast.

One day later, on Oct. 10, 2015, the Canadian Broadcasting Corporation's Quirks & Quarks radio programme broadcast, and made available as a podcast, a 2015 Canadian election science debate/panel,

There is just over a week to go before Canadians head to the polls to elect a new government. But one topic that hasn’t received much attention on the campaign trail is science.

So we thought we’d gather together candidates from each of the major federal parties to talk about science and environmental issues in this election.

We asked each of them where they and their parties stood on federal funding of science; basic vs. applied research; the controversy around federal scientists being permitted to speak about their research, and how to cut greenhouse gas emissions while protecting jobs and the economy.

Our panel of candidates were:

– Lynne Quarmby, The Green Party candidate [and Green Party Science critic] in Burnaby North-Seymour, and professor and Chair of the Department of Molecular Biology and Biochemistry at Simon Fraser University

– Gary Goodyear, Conservative Party candidate in Cambridge, Ontario, and former Minister of State for Science and Technology

– Marc Garneau, Liberal Party candidate in NDG-Westmount, and a former Canadian astronaut

– Megan Leslie, NDP candidate in Halifax and her party’s environment critic

It was a crackling debate. Gary Goodyear was the biggest surprise in that he was quite vigorous and informed in his defence of the government’s track record. Unfortunately, he was also quite patronizing.

The others didn't seem to have as much information and data at their fingertips. Goodyear quoted OECD reports showing Canada doing well in the sciences, and the other panelists didn't have statistics of their own to offer as a counterargument. Quarmby, Garneau, and Leslie each came back strongly on one point or another, but none of them seriously damaged Goodyear's defence. I can't help wondering whether Kennedy Stewart (NDP science critic), Laurin Liu (NDP deputy science critic), or Ted Hsu (Liberal science critic) might have been better choices for this debate.

The Quirks & Quarks debate ran approximately 40 to 45 minutes, with the remainder of the broadcast devoted to Arthur B. McDonald, Canada's 2015 Nobel Prize winner in Physics (Takaaki Kajita of the University of Tokyo shared the prize), for the discovery of neutrino oscillations, i.e., the finding that neutrinos have mass.

Kate Allen, writing an Oct. 9, 2015 article for thestar.com, got a preview of the pretaped debate and excerpted a few of the exchanges,

On science funding

Gary Goodyear: Currently, we spend more than twice what the Liberals spent in their last year. We have not cut science, and in fact our science budget this year is over $10 billion. But the strategy is rather simple. We are very strong in Canada on basic research. Where we fall down sometimes as compared to other countries is moving the knowledge that we discover in our laboratories out of the laboratory onto our factory floors where we can create jobs, and then off to the hospitals and living rooms of the world — which is how we make that home run. No longer is publishing an article the home run, as it once was.

Lynne Quarmby: I would take issue with the statement that science funding is robust in this country … The fact is that basic scientific research is at starvation levels. Truly fundamental research, without an obvious immediate application, is starving. And that is the research that is feeding the creativity — it’s the source of new ideas, and new understanding about the world, that ultimately feeds innovation.

If you’re looking for a good representation of the discussion and you don’t have time to listen to the podcast, Allen’s article is a good choice.

Finally, Research2Reality, a science outreach and communication project I profiled earlier in 2015, has produced an Oct. 9, 2015 election blog posting by Karyn Ho which, in addition to the usual 'science is dying in Canada' talk, includes links to more information and to the official party platforms, as well as an exhortation to get out there and vote.

Something seems to be in the air, as voter turnout for the advance polls is somewhere from 24% to 34% higher than usual.

Happy Thanksgiving!

ETA Oct. 14, 2015:  There’s been some commentary about the Quirks & Quarks debate elsewhere. First, there’s David Bruggeman’s Oct. 13, 2015 post on his Pasco Phronesis blog (Note: Links have been removed),

Chalk it up to being a Yank who doesn’t give Canadian science policy his full attention, but one thing (among several) I learned from the recent Canadian cross-party science debate concerns open access policy.

As I haven't posted anything on Canadian open access policies since 2010, clearly I need to catch up. I am assuming Goodyear is referring to the Tri-Agency Open Access Policy, introduced in February by his successor as Minister of State for Science and Technology. It applies to all grants issued from May 1, 2015 and forward (unless the work was already applicable to preexisting government open access policy), and applies most of the open access policy of the Canadian Institutes of Health Research (CIHR) to the other major granting agencies (the Natural Sciences and Engineering Research Council of Canada and the Social Sciences and Humanities Research Council of Canada).

The policy establishes that grantees must make research articles coming from their grants available free to the public within 12 months of publication. …

Then there's Michael Rennie, an Assistant Professor at Lakehead University and a former Canadian government scientist, whose Oct. 14, 2015 posting on his unmuzzled science blog notes this,

This [Gary Goodyear’s debate presentation] pissed me off so much it made me come out of retirement on this blog.

Listening to Gary Goodyear (Conservative representative, and MP in Cambridge and former Minister of State for Science and Technology), I became furious with the level of misinformation given. …

Rennie went ahead and Storified the Twitter responses to Goodyear's comments (Note: Links have been removed),

Here’s my Storify of tweets that help clarify a good deal of the misinformation Gary Goodyear presented during the debate, as well as some rebuttals from folks who are in the know: I was a Canadian Government Scientist with DFO [Department of Fisheries and Oceans] from 2010-2014, and was a Research Scientist at the Experimental Lakes Area [ELA], who heard about the announcement regarding the intention of the government to close the facility first-hand on the telephone at ELA.

Goodyear: “I was involved in that decision. With respect to the Experimental Lakes, we never said we would shut it down. We said that we wanted to transfer it to a facility that was better suited to operate it. And that’s exactly what we’ve done. Right now, DFO is up there undertaking some significant remediation effects to clean up those lakes that are contaminated by the science that’s been going on up there. We all hope these lakes will recover soon so that science and experimentation can continue but not under the federal envelope. So it’s secure and it’s misleading to suggest that we were trying to stop science there.”
There are so many inaccuracies in here, it's hard to know where to start. First, Goodyear's assertion that there are "contaminated lakes" at ELA is nonsense. Experiments conducted there are done using environmentally relevant exposures; in other words, what you'd see going on somewhere else on earth, and in every case, each lake has recovered to its natural state, simply by stopping the experiment.

Second, there ARE experiments going on at ELA currently, many of which I am involved in; the many tours, classes and researchers on site this year can attest to this.

Third, this "cleanup" that is ongoing is to clean up all the crap that was left behind by DFO staff during 40 years of experiments: wood debris, old gear, concrete, basically junk that was left on the shorelines of lakes. No "lake remediation" to speak of.

Fourth, the conservative government DID stop science at ELA: no new experiments were permitted to begin, even ones that were already funded and on the books, like the nanosilver experiment, which was halted until 2014, jeopardizing the futures of many students involved. Only basic monitoring occurred between 2012 and 2014.

Last, the current government deserves very little credit for the transfer of ELA to another operator; the successful move was conceived and implemented largely by other people and organizations, and the attempts made by the government to try and move the facility to a university were met with incredulity by the deans and vice presidents invited to the discussion.

There’s a lot more and I strongly recommend reading Rennie’s Storify piece.

It was unfortunate that the representatives from the other parties were not able to seriously question Goodyear’s points.

Perhaps next time (fingers crossed), the representatives from the various parties will be better prepared. I’d also like to suggest that there be some commentary from experts afterwards in the same way the leaders’ debates are followed by commentary. And while I’m dreaming, maybe there could be an opportunity for phone-in or Twitter questions.