Tag Archives: Stanford University

Immune to CRISPR?

I guess if you’re going to use bacteria as part of your gene-editing technology (CRISPR [clustered regularly interspaced short palindromic repeats]/Cas9), then you might half expect that the body’s immune system has developed some defenses. A Jan. 9, 2018 article by Sarah Zhang for The Atlantic provides some insight into what the new research suggests (Note: Links have been removed),

2018 is supposed to be the year of CRISPR in humans. The first U.S. and European clinical trials that test the gene-editing tool’s ability to treat diseases—such as sickle-cell anemia, beta thalassemia, and a type of inherited blindness—are slated to begin this year.

But the year has begun on a cautionary note. On Friday [January 5, 2018], Stanford researchers posted a preprint (which has not been peer reviewed) to the website bioRxiv highlighting a potential obstacle to using CRISPR in humans: Many of us may already be immune to it. That’s because CRISPR actually comes from bacteria that often live on or infect humans, and we have built up immunity to the proteins from these bacteria over our lives.

Not all CRISPR therapies in humans will be doomed. “We don’t think this is the end of the story. This is the start of the story,” says Porteus [Matthew Porteus, a pediatrician and stem-cell researcher at Stanford]. There are likely ways around the problem of immunity to CRISPR proteins, and many of the early clinical trials appear to be designed around this problem.

Porteus and his colleagues focused on two versions of Cas9, the bacterial protein most commonly used in CRISPR gene editing. One comes from Staphylococcus aureus, which often harmlessly lives on skin but can sometimes cause staph infections, and another from Streptococcus pyogenes, which causes strep throat but can also become “flesh-eating bacteria” when it spreads to other parts of the body. So yeah, you want your immune system to be on guard against these bacteria.

The human immune system has a couple different ways of recognizing foreign proteins, and the team tested for both. First, they looked to see if people have molecules in their blood called antibodies that can specifically bind to Cas9. Among 34 people they tested, 79 percent had antibodies against the staph Cas9 and 65 percent against the strep Cas9.

The Stanford team only tested for preexisting immunity against Cas9, but anytime you inject a large bacterial protein into the human body, it can provoke an immune response. After all, that’s how the immune system learns to fight off bacteria it’s never seen before. (Preexisting immunity can make the response faster and more robust, though.)

The danger of the immune system turning on a patient’s body hangs over a lot of research into correcting genes. In the late 1990s and 2000s, research into gene therapy was derailed by the death of 18-year-old Jesse Gelsinger, who died from an immune reaction to the virus used to deliver the corrected gene. This is the worst-case scenario that the CRISPR world hopes to avoid.

Here’s a link to and a citation for the preprint,

Identification of Pre-Existing Adaptive Immunity to Cas9 Proteins in Humans by Carsten Trevor Charlesworth, Priyanka S Deshpande, Daniel P Dever, Beruh Dejene, Natalia Gomez-Ospina, Sruthi Mantri, Mara Pavel-Dinu, Joab Camarena, Kenneth I Weinberg, Matthew H Porteus. bioRxiv, posted January 5, 2018. DOI: https://doi.org/10.1101/243345

This article is a preprint and has not been peer-reviewed …

This preprint (a paper that has not yet been published in a journal) is open access and open for feedback.

Meanwhile, the year of CRISPR takes off (from a January 10, 2018 American Chemical Society news release on EurekAlert),

This year could be a defining one for CRISPR, the gene editing technique, which has been hailed as an important breakthrough in laboratory research. That’s because the first company-sponsored clinical studies will be conducted to see if it can help treat diseases in humans, according to an article in Chemical & Engineering News (C&EN), the weekly newsmagazine of the American Chemical Society.

C&EN Assistant Editor Ryan Cross reports that a big push is coming from industry, specifically from three companies that are each partly founded by one of the three inventors of the method. They are zeroing in on the blood diseases called sickle-cell anemia and β-thalassemia, mostly because their precise cause is known. In these diseases, hemoglobin doesn’t function properly, leading to severe health issues in some people. CRISPR Therapeutics and Intellia Therapeutics plan to test the technique to boost levels of an alternative version of healthy hemoglobin. Editas Medicine, however, will also use CRISPR to correct mutations in the faulty hemoglobin gene. Labs led by university researchers are also joining the mix, starting or continuing clinical trials with the approach in 2018.

Because CRISPR is being used to cut a cell’s DNA and insert a new sequence, concerns have been raised about the potential for accidents. A cut in the wrong place could mean introducing a new mutation that could be benign — or cancerous. But according to proponents of the method, researchers are conducting extensive computer predictions and in vitro tests to help avoid this outcome.

The January 8, 2018 Chemical and Engineering News (C&EN) open access article by Ryan Cross is here.

Finally, if you are interested in how this affects research as it’s being developed, there’s University of British Columbia researcher Rosie Redfield’s January 16, 2018 posting on RRResearch blog,

Thursday’s [January 11, 2018] post described the hypothesis that bacteria might use gene transfer agent particles to inoculate other cells in the population with fragments of phage DNA, and outlined an experiment to test this.  Now I’m realizing that I need to know a lot more about the kind of immunity I should expect to see if this GTA-as-vaccine hypothesis is correct.

That should give you some idea of what I meant by “research as it’s being developed.” Redfield’s blog is not for the mildly interested.

Redfield is well known internationally as one of the first to refute research which suggested the existence of an ‘arsenic bacterium’ (see my Dec. 8, 2010 posting, “My apologies for arsenic blooper”; she’s first mentioned in the second paragraph of the second excerpt). The affair was known online as #arseniclife. There’s a May 27, 2011 essay by Carl Zimmer on Slate titled: The Discovery of Arsenic-Based Twitter: How #arseniclife changed science.

EuroScience Open Forum in Toulouse, France from July 9 to July 14, 2018

A March 22, 2018 EuroScience Open Forum (ESOF) 2018 announcement (received via email) trumpets some of the latest news for this event being held July 9 to July 14, 2018 in Toulouse, France. (Located in the south, in the region known as Occitanie, Toulouse is the fourth largest city in France and is situated on the River Garonne. See more in its Wikipedia entry.) Here’s the latest from the announcement,

ESOF 2018 Plenary Sessions

Top speakers and hot topics confirmed for the Plenary Sessions at ESOF 2018

Lorna Hughes, Professor at the University of Glasgow, Chair of the Europeana Research Advisory Board, will give a plenary keynote on “Digital humanities”. John Ioannidis, Professor of Medicine and of Health Research and Policy at Stanford University, famous for his PLoS Medicine paper “Why most Published Research Findings are False”, will talk about “Reproducibility”. A third plenary will involve María Teresa Ruiz, a Chilean astronomer and winner of the 2017 L’Oréal-UNESCO award for Women in Science; she will talk about exoplanets.


ESOF under the spotlights

French President’s high patronage: ESOF is at the top of the institutional agendas in 2018.

“Sharing science”, but also putting science at the highest level, making it a real political and societal issue in a changing world. ESOF 2018 has officially received the “High Patronage” of the President of the French Republic, Emmanuel Macron. ESOF 2018 has also been listed by the French Minister for Europe and Foreign Affairs among the 27 priority events for France.

A constellation of satellites around the ESOF planet!

Second focus on Satellite events:
– 4th GEO Blue Planet Symposium, organised 4-6 July by Mercator Ocean.
– ECSJ 2018, 5th European Conference of Science Journalists, co-organised by the French Association of Science Journalists in the News Press (AJSPI) and the Union of European Science Journalists’ Associations (EUSJA), on 8 July.
– Esprit de Découvertes (Discovery spirit), organised by the Académie des Sciences, Inscriptions et Belles Lettres de Toulouse on 8 July.

More Satellite events to come! Don’t forget to stay long enough to participate in these focused Satellite Events and … to discover the city.

The programme for ESOF 2018 can be found here.

Science meets poetry

As has become usual, a European City of Science event is being held in Toulouse in concert (more or less) with, and in celebration of, the ESOF event. The City of Science event runs from July 7 to July 16, 2018.

Organizers have not announced much in the way of programming for the City of Science other than a ‘Science meets Poetry’ meeting,

A unique feature of ESOF is the Science meets Poetry day, which is held at every Forum and brings poets and scientists together.

Indeed, there is today a real artistic movement of poets connected with ESOF. Famous participants from earlier meetings include contributors such as the late Seamus Heaney, Roald Hoffmann, Jean-Pierre Luminet and Prince Henrik of Denmark, but many young and aspiring poets are also involved.

The meeting is in two parts:

  • lectures on subjects involving science with poetry
  • a poster session for contributed poems

There are competitions associated with the event and every Science meets Poetry day gives rise to the publication of Proceedings in book form.

In Toulouse, the event will be staged by EuroScience in collaboration with the Académie des Jeux Floraux of Toulouse, the Société des Poètes Français and the European Academy of Sciences Arts and Letters, under the patronage of UNESCO. The full programme will be announced later, but it includes such themes as: a celebration of the number 7, in honour of the seven Troubadours of Toulouse, who held the first Jeux Floraux in the year 1323; space travel and the first poets and scientists who wrote about it (including Cyrano de Bergerac and Johannes Kepler); from Metrodorus and Diophantus of Alexandria to Fermat’s Last Theorem; the poetry of ecology; Lafayette’s ship the Hermione seen from America; and many other thought-provoking subjects.

The meeting will be held in the Hôtel d’Assézat, one of the finest old buildings of the ancient city of Toulouse.

Exceptionally, it will be open to registered participants from ESOF and also to some members of the public within the limits of available space.

Tentative Programme for the Science meets Poetry day on the 12th of July 2018

(some Speakers are still to be confirmed)

  • 09:00 – 09:30 A welcome for the poets: the legendary Troubadours of Toulouse and the poetry of the number 7 (Philippe Dazet-Brun, Académie des Jeux Floraux)
  • 09:30 – 10:00 The science and the poetry of violets from Toulouse (Marie-Thérèse Esquerré-Tugayé, Laboratoire de Recherche en Sciences Végétales, Université Toulouse III-CNRS)
  • 10:00 – 10:30 The true Cyrano de Bergerac, Gascon poet, and his celebrated travels to the Moon (Jean-Charles Dorge, Société des Poètes Français)
  • 10:30 – 11:00 Coffee break (with poems as posters)
  • 11:00 – 11:30 Kepler the author and the imaginary travels of the famous astronomer to the Moon (Uli Rothfuss, die Kogge International Society of German-language authors)
  • 11:30 – 12:00 Sputnik and space in Russian literature (Alla-Valeria Mikhalevitch, Laboratory of the Russian Academy of Sciences, Saint Petersburg)
  • 12:00 – 12:30 Poems for the planet Mars (James Philip Kotsybar, the ‘Bard of Mars’, California and NASA, USA)
  • 12:30 – 14:00 Lunch and meetings of the juries of the poetry competitions
  • 14:00 – 14:30 The voyage of the Hermione and “Lafayette, here we come!” seen by an American poet (Nick Norwood, University of Columbus, Ohio)
  • 14:30 – 15:00 Alexandria, Toulouse and Oxford: the poem rendered by Eutrope and Fermat’s Last Theorem (Chaunes [Jean-Patrick Connerade], European Academy of Sciences, Arts and Letters, UNESCO)
  • 15:00 – 15:30 How biology is celebrated in contemporary poetry (Assumpcio Forcada, biologist and poet from Barcelona)
  • 15:30 – 16:00 A book of poems around ecology: a central subject in modern poetry (Sam Illingworth, Manchester Metropolitan University)
  • 16:00 – 16:30 Coffee break (with poems as posters)
  • 16:30 – 17:00 Toulouse and Europe: poetry at the crossroads of European languages (Stefka Hrusanova, Bulgarian Academy and Linguaggi-Di-Versi)
  • 17:00 – 17:30 Round table: seven poets from Toulouse give their views on the theme “Languages, invisible frontiers within both science and poetry”
  • 17:30 – 18:00 The winners of the poetry competitions are announced
  • 18:00 – 18:15 Closing remarks (Chaunes)

I’m fascinated, as in all the years I’ve covered European City of Science events, I’ve never before tripped across a ‘Science meets Poetry’ meeting. Sadly, there’s no contact information for the meeting’s organizers. However, you can sign up for a newsletter, and there are contacts for the larger event, the European City of Science or, as they are calling it in Toulouse, the Science in the City Festival,

Contact

Camille Rossignol (Toulouse Métropole)

camille.rossignol@toulouse-metropole.fr

+33 (0)5 36 25 27 83

François Lafont (ESOF 2018 / So Toulouse)

francois.lafont@toulouse2018.esof.eu

+33 (0)5 61 14 58 47

Travel grants for media types

One last note, and this one is for journalists: it’s still possible to apply for a travel grant, which helps ease, but not remove, the pain of travel expenses. From the ESOF 2018 Media Travel Grants webpage,

ESOF 2018 – ECSJ 2018 Travel Grants

The 5th European Conference of Science Journalists (ECSJ2018) is offering 50 travel + accommodation grants of up to 400€ to international journalists interested in attending ECSJ and ESOF.

We are looking for active professional journalists who cover science or science policy regularly (not necessarily exclusively), with an interest in reflecting on their professional practices and ethics. Applicants can be freelancers or staff, and can work for print, web, or broadcast media.

More information

ESOF 2018 Nature Travel Grants

Springer Nature is a leading research, educational and professional publisher, providing quality content to its communities through a range of innovative platforms, products and services, and is home to trusted brands including Nature Research.

Nature Research has supported ESOF since its very first meeting in 2004 and is funding the Nature Travel Grant Scheme for journalists to attend ESOF2018 with the aim of increasing the impact of ESOF. The Nature Travel Grant Scheme offers a lump sum of £400 for journalists based in Europe and £800 for journalists based outside of Europe, to help cover the costs of travel and accommodation to attend ESOF2018.

More information

Good luck!

(My previous posting about ESOF 2018 was on Sept. 4, 2017 [scroll down about 50% of the way], should you be curious.)

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.
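As an aside, for readers who want a concrete sense of how such disparate measures become one “measuring stick”: the report’s trend charts express metrics relative to a base year, and here is a minimal Python sketch of that kind of base-year normalization. The metric names and numbers are invented for illustration; this is not the AI Index’s actual code or data.

```python
# Hypothetical sketch: express each activity metric as a multiple of its
# level in a base year, so very different quantities can be charted together.
# All names and numbers below are invented for illustration.

metrics = {
    "papers_published":  {2000: 10_000, 2017: 90_000},
    "course_enrollment": {2000: 500,    2017: 5_500},
    "ai_startups":       {2000: 50,     2017: 700},
}

BASE_YEAR = 2000

def index_value(series: dict, year: int) -> float:
    """Return a metric's level as a multiple of its base-year level (1.0)."""
    return series[year] / series[BASE_YEAR]

for name, series in metrics.items():
    print(f"{name}: {index_value(series, 2017):.1f}x its {BASE_YEAR} level")
```

The design choice worth noticing is that normalization sidesteps units entirely: paper counts, enrollments, and startup counts all become dimensionless multiples of their own history.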

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions and classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoos.

You can find the AI Index here. They’re featuring their 2017 report, but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report; a very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can develop blind spots, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

Beautiful solar cells based on insect eyes

What a gorgeous image!

The compound eye of a fly inspired Stanford researchers to create a compound solar cell consisting of perovskite microcells encapsulated in a hexagon-shaped scaffold. (Image credit: Thomas Shahan/Creative Commons)

An August 31, 2017 news item on Nanowerk describes research into solar cells being performed at Stanford University (Note: A link has been removed),

Packing tiny solar cells together, like micro-lenses in the compound eye of an insect, could pave the way to a new generation of advanced photovoltaics, say Stanford University scientists.

In a new study, the Stanford team used the insect-inspired design to protect a fragile photovoltaic material called perovskite from deteriorating when exposed to heat, moisture or mechanical stress. The results are published in the journal Energy & Environmental Science (“Scaffold-reinforced perovskite compound solar cells”).

An August 31, 2017 Stanford University news release (also on EurekAlert) by Mark Schwartz, which originated the news item, provides more detail,

“Perovskites are promising, low-cost materials that convert sunlight to electricity as efficiently as conventional solar cells made of silicon,” said Reinhold Dauskardt, a professor of materials science and engineering and senior author of the study. “The problem is that perovskites are extremely unstable and mechanically fragile. They would barely survive the manufacturing process, let alone be durable long term in the environment.”

Most solar devices, like rooftop panels, use a flat, or planar, design. But that approach doesn’t work well with perovskite solar cells.

“Perovskites are the most fragile materials ever tested in the history of our lab,” said graduate student Nicholas Rolston, a co-lead author of the E&ES study. “This fragility is related to the brittle, salt-like crystal structure of perovskite, which has mechanical properties similar to table salt.”

Eye of the fly

To address the durability challenge, the Stanford team turned to nature.

“We were inspired by the compound eye of the fly, which consists of hundreds of tiny segmented eyes,” Dauskardt explained. “It has a beautiful honeycomb shape with built-in redundancy: If you lose one segment, hundreds of others will operate. Each segment is very fragile, but it’s shielded by a scaffold wall around it.”

Scaffolds in a compound solar cell filled with perovskite after fracture testing. (Image credit: Dauskardt Lab/Stanford University)

Using the compound eye as a model, the researchers created a compound solar cell consisting of a vast honeycomb of perovskite microcells, each encapsulated in a hexagon-shaped scaffold just 0.02 inches (500 microns) wide.

“The scaffold is made of an inexpensive epoxy resin widely used in the microelectronics industry,” Rolston said. “It’s resilient to mechanical stresses and thus far more resistant to fracture.”

Tests conducted during the study revealed that the scaffolding had little effect on how efficiently perovskite converted light into electricity.

“We got nearly the same power-conversion efficiencies out of each little perovskite cell that we would get from a planar solar cell,” Dauskardt said. “So we achieved a huge increase in fracture resistance with no penalty for efficiency.”

Durability

But could the new device withstand the kind of heat and humidity that conventional rooftop solar panels endure?

To find out, the researchers exposed encapsulated perovskite cells to temperatures of 185 F (85 C) and 85 percent relative humidity for six weeks. Despite these extreme conditions, the cells continued to generate electricity at relatively high rates of efficiency.

Dauskardt and his colleagues have filed a provisional patent for the new technology. To improve efficiency, they are studying new ways to scatter light from the scaffold into the perovskite core of each cell.

“We are very excited about these results,” he said. “It’s a new way of thinking about designing solar cells. These scaffold cells also look really cool, so there are some interesting aesthetic possibilities for real-world applications.”

Researchers have also made this image available,

Caption: A compound solar cell illuminated from a light source below. Hexagonal scaffolds are visible in the regions coated by a silver electrode. The new solar cell design could help scientists overcome a major roadblock to the development of perovskite photovoltaics. Credit: Dauskardt Lab/Stanford University

Not quite as weirdly beautiful as the insect eyes.

Here’s a link to and a citation for the paper,

Scaffold-reinforced perovskite compound solar cells by Brian L. Watson, Nicholas Rolston, Adam D. Printz, and Reinhold H. Dauskardt. Energy & Environmental Science, 2017. DOI: 10.1039/C7EE02185B. First published on 23 Aug 2017.

This paper is behind a paywall.

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued a news release (news item) that didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (who, what, where, when, why and, sometimes, how) as possible in the first sentence/paragraph. This one is written more in the style of a magazine article, so the details take a while to emerge. From the July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017). DOI: 10.1038/nature22994. Published online 05 July 2017.

This paper is behind a paywall.

Machine learning programs learn bias

The notion of bias in artificial intelligence (AI)/algorithms/robots is gaining prominence (links to other posts featuring algorithms and bias are at the end of this post). The latest research concerns machine learning where an artificial intelligence system trains itself with ordinary human language from the internet. From an April 13, 2017 American Association for the Advancement of Science (AAAS) news release on EurekAlert,

As artificial intelligence systems “learn” language from existing texts, they exhibit the same biases that humans do, a new study reveals. The results not only provide a tool for studying prejudicial attitudes and behavior in humans, but also emphasize how language is intimately intertwined with historical biases and cultural stereotypes.

A common way to measure biases in humans is the Implicit Association Test (IAT), where subjects are asked to pair two concepts they find similar, in contrast to two concepts they find different; their response times can vary greatly, indicating how well they associated one word with another (for example, people are more likely to associate “flowers” with “pleasant,” and “insects” with “unpleasant”).

Here, Aylin Caliskan and colleagues developed a similar way to measure biases in AI systems that acquire language from human texts; rather than measuring lag time, however, they used the statistical number of associations between words, analyzing roughly 2.2 million words in total. Their results demonstrate that AI systems retain biases seen in humans. For example, studies of human behavior show that the exact same resume is 50% more likely to result in an opportunity for an interview if the candidate’s name is European American rather than African-American. Indeed, the AI system was more likely to associate European American names with “pleasant” stimuli (e.g. “gift,” or “happy”). In terms of gender, the AI system also reflected human biases, where female words (e.g., “woman” and “girl”) were more associated than male words with the arts, compared to mathematics. In a related Perspective, Anthony G. Greenwald discusses these findings and how they could be used to further analyze biases in the real world.

There are more details about the research in this April 13, 2017 Princeton University news release on EurekAlert (also on ScienceDaily),

In debates over the future of artificial intelligence, many experts think of the new systems as coldly logical and objectively rational. But in a new study, researchers have demonstrated how machines can be reflections of us, their creators, in potentially problematic ways. Common machine learning programs, when trained with ordinary human language available online, can acquire cultural biases embedded in the patterns of wording, the researchers found. These biases range from the morally neutral, like a preference for flowers over insects, to the objectionable views of race and gender.

Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers for processing the natural language humans use to communicate, for instance in doing online text searches, image categorization and automated translations.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said researcher Arvind Narayanan, an assistant professor of computer science and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University, as well as an affiliate scholar at Stanford Law School’s Center for Internet and Society. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” was published April 14 [2017] in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton; Joanna Bryson, a reader at University of Bath, and CITP affiliate, is a coauthor.

As a touchstone for documented human biases, the study turned to the Implicit Association Test, used in numerous social psychology studies since its development at the University of Washington in the late 1990s. The test measures response times (in milliseconds) by human subjects asked to pair word concepts displayed on a computer screen. Response times are far shorter, the Implicit Association Test has repeatedly shown, when subjects are asked to pair two concepts they find similar, versus two concepts they find dissimilar.

Take flower types, like “rose” and “daisy,” and insects like “ant” and “moth.” These words can be paired with pleasant concepts, like “caress” and “love,” or unpleasant notions, like “filth” and “ugly.” People more quickly associate the flower words with pleasant concepts, and the insect terms with unpleasant ideas.

The Princeton team devised an experiment with a program that essentially functioned as a machine learning version of the Implicit Association Test. Called GloVe, and developed by Stanford University researchers, the popular, open-source program is of the sort that a startup machine learning company might use at the heart of its product. The GloVe algorithm can represent the co-occurrence statistics of words in, say, a 10-word window of text. Words that often appear near one another have a stronger association than those words that seldom do.
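To make the “10-word window” idea concrete, here is a toy Python sketch of the co-occurrence counting that embeddings like GloVe start from (GloVe then fits word vectors to these statistics; the one-sentence “corpus” below is invented purely for illustration).

```python
from collections import Counter

# Toy sketch: count how often pairs of words appear within a fixed window.
# Embedding algorithms such as GloVe start from statistics like these and
# then fit word vectors to them. The one-sentence corpus is invented.

corpus = ("the nurse and the teacher spoke while "
          "the engineer and the scientist coded").split()
WINDOW = 10  # the "10-word window" mentioned above

cooccurrence = Counter()
for i, word in enumerate(corpus):
    for neighbor in corpus[max(0, i - WINDOW):i]:  # words preceding position i
        cooccurrence[tuple(sorted((word, neighbor)))] += 1

print(cooccurrence[("nurse", "teacher")])  # times the pair fell within the window
```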

The Stanford researchers turned GloVe loose on a huge trawl of contents from the World Wide Web, containing 840 billion words. Within this large sample of written human culture, Narayanan and colleagues then examined sets of so-called target words, like “programmer, engineer, scientist” and “nurse, teacher, librarian” alongside two sets of attribute words, such as “man, male” and “woman, female,” looking for evidence of the kinds of biases humans can unwittingly possess.
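The bias measurement itself boils down to comparing cosine similarities between word vectors. Here is a minimal sketch of that kind of association score; the paper defines a more careful statistic (the Word Embedding Association Test, with significance testing), and the tiny three-number “vectors” below are invented purely for illustration, not taken from GloVe.

```python
import math

# Minimal sketch: is a target word's vector closer to attribute set A or B?
# The real study used GloVe vectors trained on 840 billion words and a more
# careful statistic (WEAT); these 3-dimensional vectors are invented.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(target, attrs_a, attrs_b):
    """Mean cosine similarity to A minus mean similarity to B.
    Positive means the target sits closer to A than to B."""
    mean_a = sum(cosine(target, a) for a in attrs_a) / len(attrs_a)
    mean_b = sum(cosine(target, b) for b in attrs_b) / len(attrs_b)
    return mean_a - mean_b

vectors = {
    "engineer": [0.9, 0.1, 0.3],
    "man":      [0.8, 0.2, 0.1],
    "male":     [0.7, 0.3, 0.2],
    "woman":    [0.1, 0.9, 0.4],
    "female":   [0.2, 0.8, 0.5],
}

score = association(vectors["engineer"],
                    [vectors["man"], vectors["male"]],
                    [vectors["woman"], vectors["female"]])
print(f"association score: {score:+.3f}")  # positive: 'engineer' leans male here
```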

In the results, innocent, inoffensive biases, like for flowers over bugs, showed up, but so did examples along lines of gender and race. As it turned out, the Princeton machine learning experiment managed to replicate the broad substantiations of bias found in select Implicit Association Test studies over the years that have relied on live, human subjects.

For instance, the machine learning program associated female names more with familial attribute words, like “parents” and “wedding,” than male names. In turn, male names had stronger associations with career attributes, like “professional” and “salary.” Of course, results such as these are often just objective reflections of the true, unequal distributions of occupation types with respect to gender – like how 77 percent of computer programmers are male, according to the U.S. Bureau of Labor Statistics.

Yet this correctly distinguished bias about occupations can end up having pernicious, sexist effects. An example: when foreign languages are naively processed by machine learning programs, leading to gender-stereotyped sentences. The Turkish language uses a gender-neutral, third person pronoun, “o.” Plugged into the well-known, online translation service Google Translate, however, the Turkish sentences “o bir doktor” and “o bir hemşire” with this gender-neutral pronoun are translated into English as “he is a doctor” and “she is a nurse.”

“This paper reiterates the important point that machine learning methods are not ‘objective’ or ‘unbiased’ just because they rely on mathematics and algorithms,” said Hanna Wallach, a senior researcher at Microsoft Research New York City, who was not involved in the study. “Rather, as long as they are trained using data from society and as long as society exhibits biases, these methods will likely reproduce these biases.”

Another objectionable example harkens back to a well-known 2004 paper by Marianne Bertrand of the University of Chicago Booth School of Business and Sendhil Mullainathan of Harvard University. The economists sent out close to 5,000 identical resumes to 1,300 job advertisements, changing only the applicants’ names to be either traditionally European American or African American. The former group was 50 percent more likely to be offered an interview than the latter. In an apparent corroboration of this bias, the new Princeton study demonstrated that a set of African American names had more unpleasantness associations than a European American set.

Computer programmers might hope to prevent cultural stereotype perpetuation through the development of explicit, mathematics-based instructions for the machine learning programs underlying AI systems. Not unlike how parents and mentors try to instill concepts of fairness and equality in children and students, coders could endeavor to make machines reflect the better angels of human nature.

“The biases that we studied in the paper are easy to overlook when designers are creating systems,” said Narayanan. “The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.”

Here’s a link to and a citation for the Princeton paper,

Semantics derived automatically from language corpora contain human-like biases by Aylin Caliskan, Joanna J. Bryson, Arvind Narayanan. Science, 14 Apr 2017: Vol. 356, Issue 6334, pp. 183-186. DOI: 10.1126/science.aal4230

This paper appears to be open access.

Links to more cautionary posts about AI,

Aug 5, 2009: Autonomous algorithms; intelligent windows; pretty nano pictures

June 14, 2016:  Accountability for artificial intelligence decision-making

Oct. 25, 2016 Removing gender-based stereotypes from algorithms

March 1, 2017: Algorithms in decision-making: a government inquiry in the UK

There’s also a book which makes some of the current use of AI programmes and big data quite accessible reading: Cathy O’Neil’s ‘Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy’.

Game-changing electronics: new ultrafast, flexible, and transparent devices

There are two news bits about game-changing electronics, one from the UK and the other from the US.

United Kingdom (UK)

An April 3, 2017 news item on Azonano announces the possibility of a future golden age of electronics courtesy of the University of Exeter,

Engineering experts from the University of Exeter have come up with a breakthrough way to create the smallest, quickest, highest-capacity memories for transparent and flexible applications that could lead to a future golden age of electronics.

A March 31, 2017 University of Exeter press release (also on EurekAlert), which originated the news item, expands on the theme (Note: Links have been removed),

Engineering experts from the University of Exeter have developed innovative new memory using a hybrid of graphene oxide and titanium oxide. Their devices are low cost and eco-friendly to produce, are also perfectly suited for use in flexible electronic devices such as ‘bendable’ mobile phone, computer and television screens, and even ‘intelligent’ clothing.

Crucially, these devices may also have the potential to offer a cheaper and more adaptable alternative to ‘flash memory’, which is currently used in many common devices such as memory cards, graphics cards and USB computer drives.

The research team insist that these innovative new devices have the potential to revolutionise not only how data is stored, but also take flexible electronics to a new age in terms of speed, efficiency and power.

Professor David Wright, an Electronic Engineering expert from the University of Exeter and lead author of the paper said: “Using graphene oxide to produce memory devices has been reported before, but they were typically very large, slow, and aimed at the ‘cheap and cheerful’ end of the electronics goods market.

“Our hybrid graphene oxide-titanium oxide memory is, in contrast, just 50 nanometres long and 8 nanometres thick and can be written to and read from in less than five nanoseconds – with one nanometre being one billionth of a metre and one nanosecond a billionth of a second.”

Professor Craciun, a co-author of the work, added: “Being able to improve data storage is the backbone of tomorrow’s knowledge economy, as well as industry on a global scale. Our work offers the opportunity to completely transform graphene-oxide memory technology, and the potential and possibilities it offers.”

Here’s a link to and a citation for the paper,

Multilevel Ultrafast Flexible Nanoscale Nonvolatile Hybrid Graphene Oxide–Titanium Oxide Memories by V. Karthik Nagareddy, Matthew D. Barnes, Federico Zipoli, Khue T. Lai, Arseny M. Alexeev, Monica Felicia Craciun, and C. David Wright. ACS Nano, 2017, 11 (3), pp 3010–3021. DOI: 10.1021/acsnano.6b08668. Publication Date (Web): February 21, 2017.

This paper appears to be open access.

United States (US)

Researchers from Stanford University have developed flexible, biodegradable electronics.

A newly developed flexible, biodegradable semiconductor developed by Stanford engineers shown on a human hair. (Image credit: Bao lab)

A human hair? That’s amazing, and this May 3, 2017 news item on Nanowerk reveals more,

As electronics become increasingly pervasive in our lives – from smart phones to wearable sensors – so too does the ever-rising amount of electronic waste they create. A United Nations Environment Program report found that almost 50 million tons of electronic waste were thrown out in 2017 – more than 20 percent higher than the waste in 2015.

Troubled by this mounting waste, Stanford engineer Zhenan Bao and her team are rethinking electronics. “In my group, we have been trying to mimic the function of human skin to think about how to develop future electronic devices,” Bao said. She described how skin is stretchable, self-healable and also biodegradable – an attractive list of characteristics for electronics. “We have achieved the first two [flexible and self-healing], so the biodegradability was something we wanted to tackle.”

The team created a flexible electronic device that can easily degrade just by adding a weak acid like vinegar. The results were published in the Proceedings of the National Academy of Sciences (“Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics”).

“This is the first example of a semiconductive polymer that can decompose,” said lead author Ting Lei, a postdoctoral fellow working with Bao.

A May 1, 2017 Stanford University news release by Sarah Derouin, which originated the news item, provides more detail,

In addition to the polymer – essentially a flexible, conductive plastic – the team developed a degradable electronic circuit and a new biodegradable substrate material for mounting the electrical components. This substrate supports the electrical components, flexing and molding to rough and smooth surfaces alike. When the electronic device is no longer needed, the whole thing can biodegrade into nontoxic components.

Biodegradable bits

Bao, a professor of chemical engineering and materials science and engineering, had previously created a stretchable electrode modeled on human skin. That material could bend and twist in a way that could allow it to interface with the skin or brain, but it couldn’t degrade. That limited its application for implantable devices and – important to Bao – contributed to waste.

Flexible, biodegradable semiconductor on an avocado

The flexible semiconductor can adhere to smooth or rough surfaces and biodegrade to nontoxic products. (Image credit: Bao lab)

Bao said that creating a robust material that is both a good electrical conductor and biodegradable was a challenge, considering traditional polymer chemistry. “We have been trying to think how we can achieve both great electronic property but also have the biodegradability,” Bao said.

Eventually, the team found that by tweaking the chemical structure of the flexible material it would break apart under mild stressors. “We came up with an idea of making these molecules using a special type of chemical linkage that can retain the ability for the electron to smoothly transport along the molecule,” Bao said. “But also this chemical bond is sensitive to weak acid – even weaker than pure vinegar.” The result was a material that could carry an electronic signal but break down without requiring extreme measures.

In addition to the biodegradable polymer, the team developed a new type of electrical component and a substrate material that attaches to the entire electronic component. Electronic components are usually made of gold. But for this device, the researchers crafted components from iron. Bao noted that iron is a very environmentally friendly product and is nontoxic to humans.

The researchers created the substrate, which carries the electronic circuit and the polymer, from cellulose. Cellulose is the same substance that makes up paper. But unlike paper, the team altered cellulose fibers so the “paper” is transparent and flexible, while still breaking down easily. The thin film substrate allows the electronics to be worn on the skin or even implanted inside the body.

From implants to plants

The combination of a biodegradable conductive polymer and substrate makes the electronic device useful in a plethora of settings – from wearable electronics to large-scale environmental surveys with sensor dusts.

“We envision these soft patches that are very thin and conformable to the skin that can measure blood pressure, glucose value, sweat content,” Bao said. A person could wear a specifically designed patch for a day or week, then download the data. According to Bao, this short-term use of disposable electronics seems a perfect fit for a degradable, flexible design.

And it’s not just for skin surveys: the biodegradable substrate, polymers and iron electrodes make the entire component compatible with insertion into the human body. The polymer breaks down to product concentrations much lower than the published acceptable levels found in drinking water. Although the polymer was found to be biocompatible, Bao said that more studies would need to be done before implants are a regular occurrence.

Biodegradable electronics have the potential to go far beyond collecting heart disease and glucose data. These components could be used in places where surveys cover large areas in remote locations. Lei described a research scenario where biodegradable electronics are dropped by airplane over a forest to survey the landscape. “It’s a very large area and very hard for people to spread the sensors,” he said. “Also, if you spread the sensors, it’s very hard to gather them back. You don’t want to contaminate the environment so we need something that can be decomposed.” Instead of plastic littering the forest floor, the sensors would biodegrade away.

As the number of electronics increase, biodegradability will become more important. Lei is excited by their advancements and wants to keep improving performance of biodegradable electronics. “We currently have computers and cell phones and we generate millions and billions of cell phones, and it’s hard to decompose,” he said. “We hope we can develop some materials that can be decomposed so there is less waste.”

Other authors on the study include Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister McGuire, and Jeffrey Tok of Stanford University; Tsung-Ching Huang of Hewlett Packard Enterprise; and Lei-Lai Shao and Kwang-Ting Cheng of University of California, Santa Barbara.

The research was funded by the Air Force Office for Scientific Research; BASF; Marie Curie Cofund; Beatriu de Pinós fellowship; and the Kodak Graduate Fellowship.

Here’s a link to and a citation for the team’s latest paper,

Biocompatible and totally disintegrable semiconducting polymer for ultrathin and ultralightweight transient electronics by Ting Lei, Ming Guan, Jia Liu, Hung-Cheng Lin, Raphael Pfattner, Leo Shaw, Allister F. McGuire, Tsung-Ching Huang, Leilai Shao, Kwang-Ting Cheng, Jeffrey B.-H. Tok, and Zhenan Bao. PNAS 2017 doi: 10.1073/pnas.1701478114 published ahead of print May 1, 2017

This paper is behind a paywall.

The mention of cellulose in the second item piqued my interest so I checked to see if they’d used nanocellulose. No, they did not. Microcrystalline cellulose powder was used to create the cellulose film, but the researchers found a way to render that film at nanoscale thicknesses. From the Stanford paper (Note: Links have been removed),

… Moreover, cellulose films have been previously used as biodegradable substrates in electronics (28–30). However, these cellulose films are typically made with thicknesses well over 10 μm and thus cannot be used to fabricate ultrathin electronics with substrate thicknesses below 1–2 μm (7, 18, 19). To the best of our knowledge, there have been no reports on ultrathin (1–2 μm) biodegradable substrates for electronics. Thus, to realize them, we subsequently developed a method described herein to obtain ultrathin (800 nm) cellulose films (Fig. 1B and SI Appendix, Fig. S8). First, microcrystalline cellulose powders were dissolved in LiCl/N,N-dimethylacetamide (DMAc) and reacted with hexamethyldisilazane (HMDS) (31, 32), providing trimethylsilyl-functionalized cellulose (TMSC) (Fig. 1B). To fabricate films or devices, TMSC in chlorobenzene (CB) (70 mg/mL) was spin-coated on a thin dextran sacrificial layer. The TMSC film was measured to be 1.2 μm. After hydrolyzing the film in 95% acetic acid vapor for 2 h, the trimethylsilyl groups were removed, giving a 400-nm-thick cellulose film. The film thickness significantly decreased to one-third of the original film thickness, largely due to the removal of the bulky trimethylsilyl groups. The hydrolyzed cellulose film is insoluble in most organic solvents, for example, toluene, THF, chloroform, CB, and water. Thus, we can sequentially repeat the above steps to obtain an 800-nm-thick film, which is robust enough for further device fabrication and peel-off. By soaking the device in water, the dextran layer is dissolved, starting from the edges of the device to the center. This process ultimately releases the ultrathin substrate and leaves it floating on water surface (Fig. 3A, Inset).
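
The thickness numbers in that excerpt work out neatly, and a quick sketch makes the arithmetic explicit. This is a toy calculation of my own, not anything from the paper’s methods: it simply encodes the reported ~1.2 μm as-spun TMSC film, the one-third shrinkage on hydrolysis, and the repetition of the cycle.

```python
# Toy model of the repeated coat-and-hydrolyze cycle described in the excerpt.
# The numbers come straight from the paper: a ~1.2 um as-spun TMSC film
# shrinks to one-third of its thickness when hydrolysis strips the bulky
# trimethylsilyl groups, and the cycle is repeated to build up the substrate.

TMSC_THICKNESS_NM = 1200   # as-spun TMSC film, ~1.2 um
SHRINK_FACTOR = 1 / 3      # thickness retained after acetic acid vapor hydrolysis

def cellulose_thickness_nm(cycles: int) -> float:
    """Total cellulose thickness after `cycles` spin-coat/hydrolyze steps."""
    return cycles * TMSC_THICKNESS_NM * SHRINK_FACTOR

print(cellulose_thickness_nm(1))  # 400.0 -> the single-layer film
print(cellulose_thickness_nm(2))  # 800.0 -> the substrate used for devices
```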

Finally, I don’t have any grand thoughts; it’s just interesting to see different approaches to flexible electronics.

Vector Institute and Canada’s artificial intelligence sector

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

In addition, Vector is expected to receive funding from the Province of Ontario and more than 30 top Canadian and global companies eager to tap this pool of talent to grow their businesses. The institute will also work closely with other Ontario universities with AI talent.

(See my March 24, 2017 posting; scroll down about 25% for the science part, including the Pan-Canadian Artificial Intelligence Strategy in the budget.)

Not obvious in last week’s coverage of the Pan-Canadian Artificial Intelligence Strategy is that the much lauded Hinton has been living in the US and working for Google. These latest announcements (Pan-Canadian AI Strategy and Vector Institute) mean that he’s moving back.

A March 28, 2017 article by Kate Allen for TorontoStar.com provides more details about the Vector Institute, Hinton, and the Canadian ‘brain drain’ as it applies to artificial intelligence (Note: A link has been removed),

Toronto will host a new institute devoted to artificial intelligence, a major gambit to bolster a field of research pioneered in Canada but consistently drained of talent by major U.S. technology companies like Google, Facebook and Microsoft.

The Vector Institute, an independent non-profit affiliated with the University of Toronto, will hire about 25 new faculty and research scientists. It will be backed by more than $150 million in public and corporate funding in an unusual hybridization of pure research and business-minded commercial goals.

The province will spend $50 million over five years, while the federal government, which announced a $125-million Pan-Canadian Artificial Intelligence Strategy in last week’s budget, is providing at least $40 million, backers say. More than two dozen companies have committed millions more over 10 years, including $5 million each from sponsors including Google, Air Canada, Loblaws, and Canada’s five biggest banks [Bank of Montreal (BMO), Canadian Imperial Bank of Commerce (CIBC; President’s Choice Financial), Royal Bank of Canada (RBC), Scotiabank (Tangerine), Toronto-Dominion Bank (TD Canada Trust)].

The mode of artificial intelligence that the Vector Institute will focus on, deep learning, has seen remarkable results in recent years, particularly in image and speech recognition. Geoffrey Hinton, considered the “godfather” of deep learning for the breakthroughs he made while a professor at U of T, has worked for Google since 2013 in California and Toronto.

Hinton will move back to Canada to lead a research team based at the tech giant’s Toronto offices and act as chief scientific adviser of the new institute.

Researchers trained in Canadian artificial intelligence labs fill the ranks of major technology companies, working on tools like instant language translation, facial recognition, and recommendation services. Academic institutions and startups in Toronto, Waterloo, Montreal and Edmonton boast leaders in the field, but other researchers have left for U.S. universities and corporate labs.

The goals of the Vector Institute are to retain, repatriate and attract AI talent, to create more trained experts, and to feed that expertise into existing Canadian companies and startups.

Hospitals are expected to be a major partner, since health care is an intriguing application for AI. Last month, researchers from Stanford University announced they had trained a deep learning algorithm to identify potentially cancerous skin lesions with accuracy comparable to human dermatologists. The Toronto company Deep Genomics is using deep learning to read genomes and identify mutations that may lead to disease, among other things.

Intelligent algorithms can also be applied to tasks that might seem less virtuous, like reading private data to better target advertising. Zemel [Richard Zemel, the institute’s research director and a professor of computer science at U of T] says the centre is creating an ethics working group [emphasis mine] and maintaining ties with organizations that promote fairness and transparency in machine learning. As for privacy concerns, “that’s something we are well aware of. We don’t have a well-formed policy yet but we will fairly soon.”

The institute’s annual funding pales in comparison to the revenues of the American tech giants, which are measured in tens of billions. The risk the institute’s backers are taking is simply creating an even more robust machine learning PhD mill for the U.S.

“They obviously won’t all stay in Canada, but Toronto industry is very keen to get them,” Hinton said. “I think Trump might help there.” Two researchers on Hinton’s new Toronto-based team are Iranian, one of the countries targeted by U.S. President Donald Trump’s travel bans.

Ethics do seem to be a bit of an afterthought. Presumably the Vector Institute’s ‘ethics working group’ won’t include any regular folks. Is there any thought to what the rest of us think about these developments? As there will also be some collaboration with other proposed AI institutes, including ones at the University of Montreal (Université de Montréal) and the University of Alberta (Kate McGillivray’s article, coming up shortly, mentions them), might the ethics group be centered in either Edmonton or Montreal? Interestingly, two Canadians (Timothy Caulfield at the University of Alberta and Eric Racine at Université de Montréal) testified at the US Commission for the Study of Bioethical Issues Feb. 10 – 11, 2014 meeting on brain research, ethics, and nanotechnology. Still speculating here, but I imagine Caulfield and/or Racine could be persuaded to extend their expertise in ethics and the human brain to AI and its neural networks.

Getting back to the topic at hand, the AI scene in Canada: Allen’s article is worth reading in its entirety if you have the time.

Kate McGillivray’s March 29, 2017 article for the Canadian Broadcasting Corporation’s (CBC) news online provides more details about the Canadian AI situation and the new strategies,

With artificial intelligence set to transform our world, a new institute is putting Toronto to the front of the line to lead the charge.

The Vector Institute for Artificial Intelligence, made possible by funding from the federal government revealed in the 2017 budget, will move into new digs in the MaRS Discovery District by the end of the year.

Vector’s funding comes partially from a $125 million investment announced in last Wednesday’s federal budget to launch a pan-Canadian artificial intelligence strategy, with similar institutes being established in Montreal and Edmonton.

“[A.I.] cuts across pretty well every sector of the economy,” said Dr. Alan Bernstein, CEO and president of the Canadian Institute for Advanced Research, the organization tasked with administering the federal program.

“Silicon Valley and England and other places really jumped on it, so we kind of lost the lead a little bit. I think the Canadian federal government has now realized that,” he said.

Stopping up the brain drain

Critical to the strategy’s success is building a homegrown base of A.I. experts and innovators — a problem in the last decade, despite pioneering work on so-called “Deep Learning” by Canadian scholars such as Yoshua Bengio and Geoffrey Hinton, a former University of Toronto professor who will now serve as Vector’s chief scientific advisor.

With few university faculty positions in Canada and with many innovative companies headquartered elsewhere, it has been tough to keep the few graduates specializing in A.I. in town.

“We were paying to educate people and shipping them south,” explained Ed Clark, chair of the Vector Institute and business advisor to Ontario Premier Kathleen Wynne.

The existence of that “fantastic science” will lean heavily on how much buy-in Vector and Canada’s other two A.I. centres get.

Toronto’s portion of the $125 million is a “great start,” said Bernstein, but taken alone, “it’s not enough money.”

“My estimate of the right amount of money to make a difference is a half a billion or so, and I think we will get there,” he said.

Jessica Murphy’s March 29, 2017 article for the British Broadcasting Corporation’s (BBC) news online offers some intriguing detail about the Canadian AI scene,

Canadian researchers have been behind some recent major breakthroughs in artificial intelligence. Now, the country is betting on becoming a big player in one of the hottest fields in technology, with help from the likes of Google and RBC [Royal Bank of Canada].

In an unassuming building on the University of Toronto’s downtown campus, Geoff Hinton laboured for years on the “lunatic fringe” of academia and artificial intelligence, pursuing research in an area of AI called neural networks.

Also known as “deep learning”, neural networks are computer programs that learn in a similar way to human brains. The field showed early promise in the 1980s, but the tech sector turned its attention to other AI methods after that promise seemed slow to develop.

“The approaches that I thought were silly were in the ascendancy and the approach that I thought was the right approach was regarded as silly,” says the British-born [emphasis mine] professor, who splits his time between the university and Google, where he is a vice-president of engineering fellow.

Neural networks are used by the likes of Netflix to recommend what you should binge watch and by smartphones with voice assistance tools. Google DeepMind’s AlphaGo AI used them to win against a human in the ancient game of Go in 2016.

Foteini Agrafioti, who heads up the new RBC Research in Machine Learning lab at the University of Toronto, said those recent innovations made AI attractive to researchers and the tech industry.

“Anything that’s powering Google’s engines right now is powered by deep learning,” she says.

Developments in the field helped jumpstart innovation and paved the way for the technology’s commercialisation. They also captured the attention of Google, IBM and Microsoft, and kicked off a hiring race in the field.

The renewed focus on neural networks has boosted the careers of early Canadian AI machine learning pioneers like Hinton, the University of Montreal’s Yoshua Bengio, and University of Alberta’s Richard Sutton.

Money from big tech is coming north, along with investments by domestic corporations like banking multinational RBC and auto parts giant Magna, and millions of dollars in government funding.

Former banking executive Ed Clark will head the institute, and says the goal is to make Toronto, which has the largest concentration of AI-related industries in Canada, one of the top five places in the world for AI innovation and business.

The founders also want it to serve as a magnet and retention tool for top talent aggressively head-hunted by US firms.

Clark says they want to “wake up” Canadian industry to the possibilities of AI, which is expected to have a massive impact on fields like healthcare, banking, manufacturing and transportation.

Google invested C$4.5m (US$3.4m/£2.7m) last November [2016] in the University of Montreal’s Montreal Institute for Learning Algorithms.

Microsoft is funding a Montreal startup, Element AI. The Seattle-based company also announced it would acquire Montreal-based Maluuba and help fund AI research at the University of Montreal and McGill University.

Thomson Reuters and General Motors both recently moved AI labs to Toronto.

RBC is also investing in the future of AI in Canada, including opening a machine learning lab headed by Agrafioti, co-funding a program to bring global AI talent and entrepreneurs to Toronto, and collaborating with Sutton and the University of Alberta’s Machine Intelligence Institute.

Canadian tech also sees the travel uncertainty created by the Trump administration in the US as making Canada more attractive to foreign talent. (One of Clark’s selling points is that Toronto is an “open and diverse” city.)

This may reverse the ‘brain drain’ but it appears Canada’s role as a ‘branch plant economy’ for foreign (usually US) companies could become an important discussion once more. From the ‘Foreign ownership of companies of Canada’ Wikipedia entry (Note: Links have been removed),

Historically, foreign ownership was a political issue in Canada in the late 1960s and early 1970s, when it was believed by some that U.S. investment had reached new heights (though its levels had actually remained stable for decades), and then in the 1980s, during debates over the Free Trade Agreement.

But the situation has changed, since in the interim period Canada itself became a major investor and owner of foreign corporations. Since the 1980s, Canada’s levels of investment and ownership in foreign companies have been larger than foreign investment and ownership in Canada. In some smaller countries, such as Montenegro, Canadian investment is sizable enough to make up a major portion of the economy. In Northern Ireland, for example, Canada is the largest foreign investor. By becoming foreign owners themselves, Canadians have become far less politically concerned about investment within Canada.

Of note is that Canada’s largest companies by value, and largest employers, tend to be foreign-owned in a way that is more typical of a developing nation than a G8 member. The best example is the automotive sector, one of Canada’s most important industries. It is dominated by American, German, and Japanese giants. Although this situation is not unique to Canada in the global context, it is unique among G-8 nations, and many other relatively small nations also have national automotive companies.

It’s interesting to note that sometimes Canadian companies are the big investors but that doesn’t change our basic position. And, as I’ve noted in other postings (including the March 24, 2017 posting), these government investments in science and technology won’t necessarily lead to a move away from our ‘branch plant economy’ towards an innovative Canada.

You can find out more about the Vector Institute for Artificial Intelligence here.

BTW, I noted that reference to Hinton as ‘British-born’ in the BBC article. He was educated in the UK and subsidized by UK taxpayers (from his Wikipedia entry; Note: Links have been removed),

Hinton was educated at King’s College, Cambridge graduating in 1970, with a Bachelor of Arts in experimental psychology.[1] He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1977 for research supervised by H. Christopher Longuet-Higgins.[3][12]

It seems Canadians are not the only ones to experience ‘brain drains’.

Finally, I wrote at length about a recent initiative between the University of British Columbia (Vancouver, Canada) and the University of Washington (Seattle, Washington), the Cascadia Urban Analytics Cooperative, in a Feb. 28, 2017 posting, noting that the initiative is being funded by Microsoft to the tune of $1M and is part of a larger cooperative effort between the province of British Columbia and the state of Washington. Artificial intelligence is not the only area where US technology companies are hedging their bets (against Trump’s administration, which seems determined to terrify people from crossing US borders) by investing in Canada.

For anyone interested in a little more information about AI in the US and China, there’s today’s (March 31, 2017) earlier posting: China, US, and the race for artificial intelligence research domination.

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, but that doesn’t become clear until you read the abstract for the paper. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert; Note: Links have been removed),

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This is a significant energy savings over traditional computing, which involves separately processing information and then storing it into memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict within 1 percent of uncertainty what voltage will be required to get the synapse to a specific electrical state and, once there, it remains at that state. In other words, unlike a common computer, where you save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
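
The release doesn’t say how the voltage-to-state prediction is done, so here is a hypothetical Python sketch of the general idea: calibrate a curve from (voltage, resulting state) measurements, then invert it to predict the programming voltage for a target state. The linear response and the noise level are assumptions of mine, not measured device behaviour.

```python
# Hypothetical sketch of "calibrate, then predict the programming voltage".
# The news release gives no device model, so this toy assumes the conductance
# state responds linearly to the applied voltage over a ~1 V range -- an
# assumption, not the device's actual physics.

import numpy as np

rng = np.random.default_rng(0)

# Pretend calibration data: (applied voltage, resulting state) pairs with noise.
true_slope = 500 / 1.0                  # ~500 states spread across ~1 V
voltages = np.linspace(0.0, 1.0, 50)
states = true_slope * voltages + rng.normal(0, 2, size=50)

# Fit a degree-1 calibration curve by least squares.
slope, offset = np.polyfit(voltages, states, 1)

def voltage_for_state(target_state: float) -> float:
    """Invert the calibration curve to predict the required voltage."""
    return (target_state - offset) / slope

print(voltage_for_state(250))  # ~0.5 V in this toy calibration
```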

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array was able to identify the handwritten digits with an accuracy between 93 and 97 percent.
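
For a rough feel of what such a simulation involves, here is a loose analogue in Python: train an ordinary classifier on a handwritten-digit dataset, then snap its weights onto a fixed grid of discrete levels, the way a device offering only ~500 conductance states would impose. This is a toy construction of my own using scikit-learn’s bundled digits data, not the Sandia team’s simulation or its 15,000 device measurements.

```python
# A loose analogue of the array simulation: train an ordinary classifier on
# handwritten digits, then quantize its weights to a fixed grid of discrete
# levels, mimicking a device limited to ~500 conductance states.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)
print("continuous weights:", model.score(X_test, y_test))

# Quantize the weights onto 500 evenly spaced levels (intercepts left alone).
W = model.coef_
levels = 500
step = (W.max() - W.min()) / (levels - 1)
model.coef_ = W.min() + np.round((W - W.min()) / step) * step
print("500-level weights: ", model.score(X_test, y_test))
```

In this kind of toy, accuracy typically barely moves after quantization, which is consistent with the paper’s claim that a few hundred distinct states are enough for high classification accuracy.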

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these type of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
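
Those two energy figures are easy to sanity-check against the numbers in the paper’s abstract (quoted below): switching below ~10 pJ versus roughly 1–100 fJ per biological synaptic event.

```python
# Back-of-the-envelope check on the energy gap, using figures from the
# paper's abstract: the ENODe switches at under ~10 pJ, while a biological
# synapse spends roughly 1-100 fJ per event.

enode_switch_energy = 10e-12   # joules, upper bound per ENODe switching event
bio_synapse_energy = 1e-15     # joules, ~1 fJ, the low end for a real synapse

print(enode_switch_energy / bio_synapse_energy)  # 10000.0 -> the "10,000x" gap
```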

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman, also of the University of Groningen in the Netherlands, Scott T. Keene and Grégorio C. Faria, also of Universidade de São Paulo, in Brazil. Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates, enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).