Tag Archives: Stanford University

High-performance, low-energy artificial synapse for neural network computing

This artificial synapse is apparently an improvement on the standard memristor-based artificial synapse, although that doesn’t become clear until you read the paper’s abstract. First, there’s a Feb. 20, 2017 Stanford University news release by Taylor Kubota (dated Feb. 21, 2017 on EurekAlert). Note: Links have been removed,

For all the improvements in computer technology over the years, we still struggle to recreate the low-energy, elegant processing of the human brain. Now, researchers at Stanford University and Sandia National Laboratories have made an advance that could help computers mimic one piece of the brain’s efficient design – an artificial version of the space over which neurons communicate, called a synapse.

“It works like a real synapse but it’s an organic electronic device that can be engineered,” said Alberto Salleo, associate professor of materials science and engineering at Stanford and senior author of the paper. “It’s an entirely new family of devices because this type of architecture has not been shown before. For many key metrics, it also performs better than anything that’s been done before with inorganics.”

The new artificial synapse, reported in the Feb. 20 issue of Nature Materials, mimics the way synapses in the brain learn through the signals that cross them. This offers significant energy savings over traditional computing, which involves separately processing information and then storing it in memory. Here, the processing creates the memory.

This synapse may one day be part of a more brain-like computer, which could be especially beneficial for computing that works with visual and auditory signals. Examples of this are seen in voice-controlled interfaces and driverless cars. Past efforts in this field have produced high-performance neural networks supported by artificially intelligent algorithms but these are still distant imitators of the brain that depend on energy-consuming traditional computer hardware.

Building a brain

When we learn, electrical signals are sent between neurons in our brain. The most energy is needed the first time a synapse is traversed. Every time afterward, the connection requires less energy. This is how synapses efficiently facilitate both learning something new and remembering what we’ve learned. The artificial synapse, unlike most other versions of brain-like computing, also fulfills these two tasks simultaneously, and does so with substantial energy savings.

“Deep learning algorithms are very powerful but they rely on processors to calculate and simulate the electrical states and store them somewhere else, which is inefficient in terms of energy and time,” said Yoeri van de Burgt, former postdoctoral scholar in the Salleo lab and lead author of the paper. “Instead of simulating a neural network, our work is trying to make a neural network.”

The artificial synapse is based on a battery design. It consists of two thin, flexible films with three terminals, connected by an electrolyte of salty water. The device works as a transistor, with one of the terminals controlling the flow of electricity between the other two.

Like a neural path in a brain being reinforced through learning, the researchers program the artificial synapse by discharging and recharging it repeatedly. Through this training, they have been able to predict, to within 1 percent uncertainty, the voltage required to bring the synapse to a specific electrical state and, once there, it stays at that state. In other words, unlike a common computer, where you have to save your work to the hard drive before you turn it off, the artificial synapse can recall its programming without any additional actions or parts.
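To make the programming idea concrete, here’s a toy model (my own sketch, not the researchers’ method) of a non-volatile analog synapse nudged toward a target electrical state by discrete charge/discharge pulses. The step size and the normalized conductance range are assumptions; only the 500-state count comes from the paper:

```python
# Toy model (not the authors' method): programming a non-volatile
# analog synapse by applying discrete charge/discharge pulses.
# The device is idealized as a conductance that moves one fixed step
# per pulse and then stays put (non-volatile), so a target state can
# be reached and then held without any refresh.

N_STATES = 500           # distinct conductance levels (from the paper)
G_MIN, G_MAX = 0.0, 1.0  # normalized conductance range (assumed)
STEP = (G_MAX - G_MIN) / (N_STATES - 1)

def program(g, target, tol=0.01):
    """Pulse the device up or down until within `tol` of the target state."""
    pulses = 0
    while abs(g - target) > tol:
        g += STEP if target > g else -STEP
        pulses += 1
    return g, pulses

g, pulses = program(0.0, 0.73)
print(g, pulses)
```

Because the modeled device is non-volatile, the loop simply stops once the state is within tolerance; no refresh or separate storage step is needed, which is the property the press release is describing.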

Testing a network of artificial synapses

Only one artificial synapse has been produced, but researchers at Sandia used 15,000 measurements from experiments on that synapse to simulate how an array of them would work in a neural network. They tested the simulated network’s ability to recognize handwritten digits 0 through 9. Tested on three datasets, the simulated array identified the handwritten digits with an accuracy between 93 and 97 percent.
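As a rough illustration of what such a simulation involves (this is my own toy sketch on synthetic data, not Sandia’s simulation or their datasets), one can train a linear classifier in full precision and then snap its weights onto a finite set of evenly spaced conductance levels, mimicking the mapping of network weights onto an array of multi-state synapses:

```python
# Toy sketch: train a classifier in floating point, then quantize its
# weights to 500 discrete levels, as a crude stand-in for implementing
# the weights with 500-state artificial synapses. Data, model, and
# training scheme are all my own assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (a stand-in for the handwritten-digit sets).
X = np.vstack([rng.normal(-1, 1, (200, 10)), rng.normal(1, 1, (200, 10))])
y = np.array([0] * 200 + [1] * 200)

# Simple perceptron-style training in full precision.
w = np.zeros(10)
for _ in range(50):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w > 0 else 0
        w += 0.01 * (yi - pred) * xi

def quantize(w, n_states=500):
    """Snap each weight to the nearest of n_states evenly spaced levels."""
    levels = np.linspace(w.min(), w.max(), n_states)
    idx = np.abs(w[:, None] - levels[None, :]).argmin(axis=1)
    return levels[idx]

acc = lambda w: np.mean((X @ w > 0) == y)
print(acc(w), acc(quantize(w)))
```

With 500 available levels the quantized weights track the full-precision ones closely, which is why so many distinct, stable conductance states matter for this kind of hardware.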

Although this task would be relatively simple for a person, traditional computers have a difficult time interpreting visual and auditory signals.

“More and more, the kinds of tasks that we expect our computing devices to do require computing that mimics the brain because using traditional computing to perform these tasks is becoming really power hungry,” said A. Alec Talin, distinguished member of technical staff at Sandia National Laboratories in Livermore, California, and senior author of the paper. “We’ve demonstrated a device that’s ideal for running these types of algorithms and that consumes a lot less power.”

This device is extremely well suited for the kind of signal identification and classification that traditional computers struggle to perform. Whereas digital transistors can be in only two states, such as 0 and 1, the researchers successfully programmed 500 states in the artificial synapse, which is useful for neuron-type computation models. In switching from one state to another they used about one-tenth as much energy as a state-of-the-art computing system needs in order to move data from the processing unit to the memory.

This, however, means they are still using about 10,000 times as much energy as the minimum a biological synapse needs in order to fire. The researchers are hopeful that they can attain neuron-level energy efficiency once they test the artificial synapse in smaller devices.
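The 10,000× figure is easy to check against the energy numbers the researchers quote: the device switches at under roughly 10 pJ, while biological synapses operate at roughly 1–100 fJ per event. A quick back-of-envelope calculation, taking the low end of the biological range (my choice of the 1 fJ endpoint is an assumption):

```python
# Back-of-envelope check of the energy comparison in the article.
# The figures come from the article and the paper's abstract; picking
# the 1 fJ low end of the biological range is my assumption.
device_energy = 10e-12      # ~10 pJ per switching event for the artificial synapse
bio_synapse_energy = 1e-15  # ~1 fJ, low end of the ~1-100 fJ biological range

ratio = device_energy / bio_synapse_energy
print(ratio)  # ~10,000, matching the "10,000 times as much energy" in the text
```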

Organic potential

Every part of the device is made of inexpensive organic materials. These aren’t found in nature but they are largely composed of hydrogen and carbon and are compatible with the brain’s chemistry. Cells have been grown on these materials and they have even been used to make artificial pumps for neural transmitters. The voltages applied to train the artificial synapse are also the same as those that move through human neurons.

All this means it’s possible that the artificial synapse could communicate with live neurons, leading to improved brain-machine interfaces. The softness and flexibility of the device also lends itself to being used in biological environments. Before any applications to biology, however, the team plans to build an actual array of artificial synapses for further research and testing.

Additional Stanford co-authors of this work include co-lead author Ewout Lubberman (also of the University of Groningen in the Netherlands), Scott T. Keene and Grégorio C. Faria (also of the Universidade de São Paulo in Brazil). Sandia National Laboratories co-authors include Elliot J. Fuller and Sapan Agarwal in Livermore and Matthew J. Marinella in Albuquerque, New Mexico. Salleo is an affiliate of the Stanford Precourt Institute for Energy and the Stanford Neurosciences Institute. Van de Burgt is now an assistant professor in microsystems and an affiliate of the Institute for Complex Molecular Studies (ICMS) at Eindhoven University of Technology in the Netherlands.

This research was funded by the National Science Foundation, the Keck Faculty Scholar Funds, the Neurofab at Stanford, the Stanford Graduate Fellowship, Sandia’s Laboratory-Directed Research and Development Program, the U.S. Department of Energy, the Holland Scholarship, the University of Groningen Scholarship for Excellent Students, the Hendrik Muller National Fund, the Schuurman Schimmel-van Outeren Foundation, the Foundation of Renswoude (The Hague and Delft), the Marco Polo Fund, the Instituto Nacional de Ciência e Tecnologia/Instituto Nacional de Eletrônica Orgânica in Brazil, the Fundação de Amparo à Pesquisa do Estado de São Paulo and the Brazilian National Council.

Here’s an abstract for the researchers’ paper (link to paper provided after abstract) and it’s where you’ll find the memristor connection explained,

The brain is capable of massively parallel information processing while consuming only ~1–100 fJ per synaptic event [1,2]. Inspired by the efficiency of the brain, CMOS-based neural architectures [3] and memristors [4,5] are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS architectures, and the stochastic and energy-costly switching of memristors complicate the path to achieve the interconnectivity, information density, and energy efficiency of the brain using either approach. Here we describe an electrochemical neuromorphic organic device (ENODe) operating with a fundamentally different mechanism from existing memristors. ENODe switches at low voltage and energy (<10 pJ for 10³ μm² devices), displays >500 distinct, non-volatile conductance states within a ~1 V range, and achieves high classification accuracy when implemented in neural network simulations. Plastic ENODes are also fabricated on flexible substrates enabling the integration of neuromorphic functionality in stretchable electronic systems [6,7]. Mechanical flexibility makes ENODes compatible with three-dimensional architectures, opening a path towards extreme interconnectivity comparable to the human brain.

Here’s a link to and a citation for the paper,

A non-volatile organic electrochemical device as a low-voltage artificial synapse for neuromorphic computing by Yoeri van de Burgt, Ewout Lubberman, Elliot J. Fuller, Scott T. Keene, Grégorio C. Faria, Sapan Agarwal, Matthew J. Marinella, A. Alec Talin, & Alberto Salleo. Nature Materials (2017) doi:10.1038/nmat4856 Published online 20 February 2017

This paper is behind a paywall.

ETA March 8, 2017 10:28 PST: You may find this piece on ferroelectricity and neuromorphic engineering of interest (March 7, 2017 posting titled: Ferroelectric roadmap to neuromorphic computing).

Going underground to observe atoms in a bid for better batteries

A Jan. 16, 2017 news item on ScienceDaily describes what lengths researchers at Stanford University (US) will go to in pursuit of their goals,

In a lab 18 feet below the Engineering Quad of Stanford University, researchers in the Dionne lab camped out with one of the most advanced microscopes in the world to capture an unimaginably small reaction.

The lab members conducted arduous experiments — sometimes requiring a continuous 30 hours of work — to capture real-time, dynamic visualizations of atoms that could someday help our phone batteries last longer and our electric vehicles go farther on a single charge.

Toiling underground in the tunneled labs, they recorded atoms moving in and out of nanoparticles less than 100 nanometers in size, with a resolution approaching 1 nanometer.

A Jan. 16, 2017 Stanford University news release (also on EurekAlert) by Taylor Kubota, which originated the news item, provides more detail,

“The ability to directly visualize reactions in real time with such high resolution will allow us to explore many unanswered questions in the chemical and physical sciences,” said Jen Dionne, associate professor of materials science and engineering at Stanford and senior author of the paper detailing this work, published Jan. 16 [2017] in Nature Communications. “While the experiments are not easy, they would not be possible without the remarkable advances in electron microscopy from the past decade.”

Their experiments focused on hydrogen moving into palladium, a class of reactions known as intercalation-driven phase transitions. These reactions are physically analogous to how ions flow through a battery or fuel cell during charging and discharging. Observing this process in real time provides insight into why nanoparticles make better electrodes than bulk materials and fits into Dionne’s larger interest in energy storage devices that can charge faster, hold more energy and stave off permanent failure.

Technical complexity and ghosts

For these experiments, the Dionne lab created palladium nanocubes, a form of nanoparticle, ranging in size from about 15 to 80 nanometers, and then placed them in a hydrogen gas environment within an electron microscope. The researchers knew that hydrogen would change both the dimensions of the lattice and the electronic properties of the nanoparticle. They thought that, with the appropriate microscope lens and aperture configuration, techniques called scanning transmission electron microscopy and electron energy loss spectroscopy might show hydrogen uptake in real time.

After months of trial and error, the results were extremely detailed, real-time videos of the changes in the particle as hydrogen was introduced. The entire process was so complicated and novel that the first time it worked, the lab didn’t even have the video software running, leading them to capture their first movie success on a smartphone.

Following these videos, they examined the nanocubes during intermediate stages of hydrogenation using a second technique in the microscope, called dark-field imaging, which relies on scattered electrons. In order to pause the hydrogenation process, the researchers plunged the nanocubes into a bath of liquid nitrogen mid-reaction, dropping their temperature to 100 kelvin (-280 °F). These dark-field images served as a way to check that the application of the electron beam hadn’t influenced the previous observations and allowed the researchers to see detailed structural changes during the reaction.

“With the average experiment spanning about 24 hours at this low temperature, we faced many instrument problems and called Ai Leen Koh [co-author and research scientist at Stanford’s Nano Shared Facilities] at the weirdest hours of the night,” recalled Fariah Hayee, co-lead author of the study and graduate student in the Dionne lab. “We even encountered a ‘ghost-of-the-joystick problem,’ where the joystick seemed to move the sample uncontrollably for some time.”

While most electron microscopes operate with the specimen held in a vacuum, the microscope used for this research has the advanced ability to allow the researchers to introduce liquids or gases to their specimen.

“We benefit tremendously from having access to one of the best microscope facilities in the world,” said Tarun Narayan, co-lead author of this study and recent doctoral graduate from the Dionne lab. “Without these specific tools, we wouldn’t be able to introduce hydrogen gas or cool down our samples enough to see these processes take place.”

Pushing out imperfections

Aside from being a widely applicable proof of concept for this suite of visualization techniques, watching the atoms move provides greater validation for the high hopes many scientists have for nanoparticle energy storage technologies.

The researchers saw the atoms move in through the corners of the nanocube and observed the formation of various imperfections within the particle as hydrogen moved within it. This may sound like an argument against the promise of nanoparticles, but it’s not the whole story.

“The nanoparticle has the ability to self-heal,” said Dionne. “When you first introduce hydrogen, the particle deforms and loses its perfect crystallinity. But once the particle has absorbed as much hydrogen as it can, it transforms itself back to a perfect crystal again.”

The researchers describe this as imperfections being “pushed out” of the nanoparticle. This ability of the nanocube to self-heal makes it more durable, a key property needed for energy storage materials that can sustain many charge and discharge cycles.

Looking toward the future

As the efficiency of renewable energy generation increases, the need for higher-quality energy storage is more pressing than ever. It’s likely that the future of storage will rely on new chemistries, and the findings of this research, including the microscopy techniques the researchers refined along the way, will apply to nearly any solution in those categories.

For its part, the Dionne lab has many directions it can go from here. The team could look at a variety of material compositions, or compare how the sizes and shapes of nanoparticles affect the way they work, and, soon, take advantage of new upgrades to their microscope to study light-driven reactions. At present, Hayee has moved on to experimenting with nanorods, which have more surface area for the ions to move through, promising potentially even faster kinetics.

Here’s a link to and a citation for the paper,

Direct visualization of hydrogen absorption dynamics in individual palladium nanoparticles by Tarun C. Narayan, Fariah Hayee, Andrea Baldi, Ai Leen Koh, Robert Sinclair, & Jennifer A. Dionne. Nature Communications 8, Article number: 14020 (2017) doi:10.1038/ncomms14020 Published online: 16 January 2017

This paper is open access.

Novel self-assembly at 102 atoms

A Jan. 13, 2017 news item on ScienceDaily announces a discovery about self-assembly of 102-atom gold nanoclusters,

Self-assembly of matter is one of the fundamental principles of nature, directing the growth of larger ordered and functional systems from smaller building blocks. Self-assembly can be observed in all length scales from molecules to galaxies. Now, researchers at the Nanoscience Centre of the University of Jyväskylä and the HYBER Centre of Excellence of Aalto University in Finland report a novel discovery of self-assembling two- and three-dimensional materials that are formed by tiny gold nanoclusters of just a couple of nanometres in size, each having 102 gold atoms and a surface layer of 44 thiol molecules. The study, conducted with funding from the Academy of Finland and the European Research Council, has been published in Angewandte Chemie.

A Jan. 13, 2017 Academy of Finland press release, which originated the news item, provides more technical information about the work,

The atomic structure of the 102-atom gold nanocluster was first resolved by the group of Roger D. Kornberg at Stanford University in 2007 (2). Since then, several further studies of its properties have been conducted in the Jyväskylä Nanoscience Centre, where it has also been used for electron microscopy imaging of virus structures (3). The thiol surface of the nanocluster has a large number of acidic groups that can form directed hydrogen bonds to neighbouring nanoclusters and initiate directed self-assembly.

The self-assembly of gold nanoclusters took place in a water–methanol mixture and produced two distinctly different superstructures that were imaged in a high-resolution electron microscope at Aalto University. In one of the structures, two-dimensional hexagonally ordered layers of gold nanoclusters were stacked together, each layer being just one nanocluster thick. By modifying the synthesis conditions, the researchers also observed three-dimensional, hollow, spherical capsid structures, in which the thickness of the capsid wall again corresponds to just one nanocluster (see figure).

While the details of the formation mechanisms of these superstructures warrant further systematic investigation, the initial observations open several new views into synthetically made self-assembling nanomaterials.

“Today, we know of several tens of different types of atomistically precise gold nanoclusters, and I believe they can exhibit a wide variety of self-assembling growth patterns that could produce a range of new meta-materials,” said Academy Professor Hannu Häkkinen, who coordinated the research at the Nanoscience Centre. “In biology, typical examples of self-assembling functional systems are viruses and vesicles. Biological self-assembled structures can also be de-assembled by gentle changes in the surrounding biochemical conditions. It’ll be of great interest to see whether these gold-based materials can be de-assembled and then re-assembled to different structures by changing something in the chemistry of the surrounding solvent.”

“The free-standing two-dimensional nanosheets will bring opportunities towards new-generation functional materials, and the hollow capsids will pave the way for highly lightweight colloidal framework materials,” Postdoctoral Researcher Nonappa (Aalto University) said.

Professor Olli Ikkala of Aalto University said: “In a broader framework, it has remained a grand challenge to master self-assembly through all length scales to tune the functional properties of materials in a rational way. So far, it has been commonly considered sufficient to achieve suitably narrow size distributions of the constituent nanoscale structural units to obtain well-defined structures. The present findings suggest a paradigm change to pursue strictly defined nanoscale units for self-assemblies.”

References:

(1)    Nonappa, T. Lahtinen, J.S. Haataja, T.-R. Tero, H. Häkkinen and O. Ikkala, “Template-Free Supracolloidal Self-Assembly of Atomically Precise Gold Nanoclusters: From 2D Colloidal Crystals to Spherical Capsids”, Angewandte Chemie International Edition, published online 23 November 2016, DOI: 10.1002/anie.201609036

(2)    P. Jadzinsky et al., “Structure of a thiol-monolayer protected gold nanoparticle at 1.1Å resolution”, Science 318, 430 (2007)

(3)    V. Marjomäki et al., “Site-specific targeting of enterovirus capsid by functionalized monodispersed gold nanoclusters”, PNAS 111, 1277 (2014)

Here’s the figure mentioned in the news release,

Figure: 2D hexagonal sheet-like and 3D capsid structures based on atomically precise gold nanoclusters as guided by hydrogen bonding between the ligands. The inset in the top left corner shows the atomic structure of one gold nanocluster.

Here’s a link to and a citation for the paper,

Template-Free Supracolloidal Self-Assembly of Atomically Precise Gold Nanoclusters: From 2D Colloidal Crystals to Spherical Capsids by Nonappa, Tanja Lahtinen, Johannes S. Haataja, Tiia-Riikka Tero, Hannu Häkkinen, and Olli Ikkala. Angewandte Chemie International Edition Volume 55, Issue 52, pages 16035–16038, December 23, 2016. Version of Record online: 23 November 2016. DOI: 10.1002/anie.201609036


This paper is behind a paywall.

‘Smart’ fabric that’s bony

Researchers at Australia’s University of New South Wales (UNSW) have devised a means of ‘weaving’ a material that mimics the bone tissue periosteum, according to a Jan. 11, 2017 news item on ScienceDaily,

For the first time, UNSW [University of New South Wales] biomedical engineers have woven a ‘smart’ fabric that mimics the sophisticated and complex properties of one of nature’s ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to ‘intelligent’ compression bandages for deep-vein thrombosis that respond to the wearer’s movement and safer steel-belt radial tyres.

A Jan. 11, 2017 UNSW press release on EurekAlert, which originated the news item, expands on the theme,

Many animal and plant tissues exhibit ‘smart’ and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable ‘bottom-up’ approaches has stymied researchers’ ability to use smart tissues to create advanced functional materials.

UNSW’s Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

“The result is a series of textile swatch prototypes that mimic periosteum’s smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles,” Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high fidelity imaging system to investigate and map its architecture.

“We then tested the feasibility of rendering periosteum’s natural tissue weaves using computer-aided design software,” Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature’s architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

“The challenge with using collagen and elastin is their fibres, which are too small to fit into the loom. So we used elastic material that mimics elastin and silk that mimics collagen,” Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes were woven, using specific combinations of collagen and elastin in a twill pattern designed to mirror periosteum’s weave. Mechanical testing of the swatches showed they exhibited similar properties found in periosteum’s natural collagen and elastin weave.

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications – one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials – the UNSW team is ultimately focused on the machine’s human potential.

“Our longer term goal is to weave biological tissues – essentially human body parts – in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum,” Ms Ng said.

An NHMRC development grant received in November [2016] will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney’s Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the ‘smart’ technology, within three years.

In searching for more information about this work, I found a Winter 2015 article (PDF; pp. 8-11) by Amy Coopes and Steve Offner for UNSW Magazine about Knothe Tate and her work (Note: In Australia, winter would be what we in the Northern Hemisphere consider summer),

Tucked away in a small room in UNSW’s Graduate School of Biomedical Engineering sits a 19th century–era weaver’s wooden loom. Operated by punch cards and hooks, the machine was the first rudimentary computer when it was unveiled in 1801. While on the surface it looks like a standard Jacquard loom, it has been enhanced with motherboards integrated into each of the loom’s five hook modules and connected to a computer. This state-of-the-art technology means complex algorithms control each of the 5,000 feed-in fibres with incredible precision.

That capacity means the loom can weave with an extraordinary variety of substances, from glass and titanium to rayon and silk, a development that has attracted industry attention around the world.

The interest lies in the natural advantage woven materials have over other manufactured substances. Instead of manipulating material to create new shades or hues as in traditional weaving, the fabrics’ mechanical properties can be modulated, to be stiff at one end, for example, and more flexible at the other.

“Instead of a pattern of colours we get a pattern of mechanical properties,” says Melissa Knothe Tate, UNSW’s Paul Trainor Chair of Biomedical Engineering. “Think of a rope; it’s uniquely good in tension and in bending. Weaving is naturally strong in that way.”


The interface of mechanics and physiology is the focus of Knothe Tate’s work. In March [2015], she travelled to the United States to present another aspect of her work at a meeting of the international Orthopedic Research Society in Las Vegas. That project – which has been dubbed “Google Maps for the body” – explores the interaction between cells and their environment in osteoporosis and other degenerative musculoskeletal conditions such as osteoarthritis.

Using previously top-secret semiconductor technology developed by optics giant Zeiss, and the same approach used by Google Maps to locate users with pinpoint accuracy, Knothe Tate and her team have created “zoomable” anatomical maps from the scale of a human joint down to a single cell.

She has also spearheaded a groundbreaking partnership that includes the Cleveland Clinic, and Brown and Stanford universities to help crunch terabytes of data gathered from human hip studies – all processed with the Google technology. Analysis that once took 25 years can now be done in a matter of weeks, bringing researchers ever closer to a set of laws that govern biological behaviour. [p. 9]

I gather she was recruited from the US to work at the University of New South Wales and this article was to highlight why they recruited her and to promote the university’s biomedical engineering department, which she chairs.

Getting back to 2017, here’s a link to and citation for the paper,

Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials by Joanna L. Ng, Lillian E. Knothe, Renee M. Whan, Ulf Knothe & Melissa L. Knothe Tate. Scientific Reports 7, Article number: 40396 (2017) doi:10.1038/srep40396 Published online: 11 January 2017

This paper is open access.

One final comment: that’s a lot of people (three out of five) with the last name Knothe in the authors’ list for the paper.

Investigating nanoparticles and their environmental impact for industry?

It seems the Center for the Environmental Implications of Nanotechnology (CEINT) at Duke University (North Carolina, US) is making an adjustment to its focus and opening the door to industry as well as government research. For some years (my first post about CEINT at Duke University is an Aug. 15, 2011 post about its mesocosms), it has focused on examining the impact of nanoparticles (also called nanomaterials) on plant life and aquatic systems. This Jan. 9, 2017 US National Science Foundation (NSF) news release (h/t Jan. 9, 2017 Nanotechnology Now news item) provides a general description of the work,

We can’t see them, but nanomaterials, both natural and manmade, are literally everywhere, from our personal care products to our building materials–we’re even eating and drinking them.

At the NSF-funded Center for Environmental Implications of Nanotechnology (CEINT), headquartered at Duke University, scientists and engineers are researching how some of these nanoscale materials affect living things. One of CEINT’s main goals is to develop tools that can help assess possible risks to human health and the environment. A key aspect of this research happens in mesocosms, which are outdoor experiments that simulate the natural environment – in this case, wetlands. These simulated wetlands in Duke Forest serve as a testbed for exploring how nanomaterials move through an ecosystem and impact living things.

CEINT is a collaborative effort bringing together researchers from Duke, Carnegie Mellon University, Howard University, Virginia Tech, University of Kentucky, Stanford University, and Baylor University. CEINT academic collaborations include on-going activities coordinated with faculty at Clemson, North Carolina State and North Carolina Central universities, with researchers at the National Institute of Standards and Technology and the Environmental Protection Agency labs, and with key international partners.

The research in this episode was supported by NSF award #1266252, Center for the Environmental Implications of NanoTechnology.

The mention of industry is in this video by O’Brien and Kellan, which describes CEINT’s latest work,

Somewhat similar in approach, although without a direct reference to industry, Canada’s Experimental Lakes Area (ELA) is being used as a test site for silver nanoparticles. Here’s more from the Distilling Science at the Experimental Lakes Area: Nanosilver project page,

Water researchers are interested in nanotechnology, and one of its most commonplace applications: nanosilver. Today these tiny particles with anti-microbial properties are being used in a wide range of consumer products. The problem with nanoparticles is that we don’t fully understand what happens when they are released into the environment.

The research at the IISD-ELA [International Institute for Sustainable Development Experimental Lakes Area] will look at the impacts of nanosilver on ecosystems. What happens when it gets into the food chain? And how does it affect plants and animals?

Here’s a video describing the Nanosilver project at the ELA,

You may have noticed a certain tone to the video and it is due to some political shenanigans, which are described in this Aug. 8, 2016 article by Bartley Kives for the Canadian Broadcasting Corporation’s (CBC) online news.

Bionic pancreas tested at home

This news about a bionic pancreas must be exciting for diabetics as it would eliminate the need for constant blood sugar testing throughout the day. From a Dec. 19, 2016 Massachusetts General Hospital news release (also on EurekAlert), Note: Links have been removed,

The bionic pancreas system developed by Boston University (BU) investigators proved better than either conventional or sensor-augmented insulin pump therapy at managing blood sugar levels in patients with type 1 diabetes living at home, with no restrictions, over 11 days. The report of a clinical trial led by a Massachusetts General Hospital (MGH) physician is receiving advance online publication in The Lancet.

“For study participants living at home without limitations on their activity and diet, the bionic pancreas successfully reduced average blood glucose, while at the same time decreasing the risk of hypoglycemia,” says Steven Russell, MD, PhD, of the MGH Diabetes Unit. “This system requires no information other than the patient’s body weight to start, so it will require much less time and effort by health care providers to initiate treatment. And since no carbohydrate counting is required, it significantly reduces the burden on patients associated with diabetes management.”

Developed by Edward Damiano, PhD, and Firas El-Khatib, PhD, of the BU Department of Biomedical Engineering, the bionic pancreas controls patients’ blood sugar with both insulin and glucagon, a hormone that increases glucose levels. After a 2010 clinical trial confirmed that the original version of the device could maintain near-normal blood sugar levels for more than 24 hours in adult patients, two follow-up trials – reported in a 2014 New England Journal of Medicine paper – showed that an updated version of the system successfully controlled blood sugar levels in adults and adolescents for five days. Another follow-up trial published in The Lancet Diabetes and Endocrinology in 2016 showed it could do the same for children as young as 6 years of age.

While minimal restrictions were placed on participants in the 2014 trials, participants in both spent nights in controlled settings, accompanied at all times by a nurse in the adult trial or remaining in a diabetes camp in the adolescent and pre-adolescent trials. Participants in the current trial had no such restrictions placed upon them, as they were able to pursue normal activities at home or at work with no imposed limitations on diet or exercise. Patients needed to live within a 30-minute drive of one of the trial sites – MGH, the University of Massachusetts Medical School, Stanford University, and the University of North Carolina at Chapel Hill – and needed to designate a contact person who lived with them and could be contacted by study staff, if necessary.

The bionic pancreas system – the same as that used in the 2014 studies – consisted of a smartphone (iPhone 4S) that could wirelessly communicate with two pumps delivering either insulin or glucagon. Every five minutes the smartphone received a reading from an attached continuous glucose monitor, which was used to calculate and administer a dose of either insulin or glucagon. The algorithms controlling the system were updated for the current trial to better respond to blood sugar variations.

While the device allows participants to enter information about each upcoming meal into a smartphone app, allowing the system to deliver an anticipatory insulin dose, such entries were optional in the current trial. If participants’ blood sugar dropped to dangerous levels or if the monitor or one of the pumps was disconnected for more than 15 minutes, the system would alert study staff, allowing them to check with the participants or their contact persons.
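The news release describes the five-minute control cycle but not the dosing algorithm itself, which is proprietary to the researchers. A minimal sketch of how such a bihormonal loop might be structured, with every function name, threshold, and gain invented purely for illustration:

```python
# Hypothetical sketch of the cycle described above: every five minutes a
# CGM reading arrives and the controller picks an insulin or glucagon dose.
# All numbers here are invented for illustration; the trial's actual
# dosing algorithm is not disclosed in the release.

TARGET_MG_DL = 110        # assumed glucose set point
ALERT_LOW_MG_DL = 60      # the release's threshold for dangerous lows

def dose_for_reading(glucose_mg_dl, body_weight_kg):
    """Return (hormone, units) for one five-minute cycle."""
    error = glucose_mg_dl - TARGET_MG_DL
    if error > 0:
        # Above target: insulin dose proportional to the excess.
        return ("insulin", round(0.001 * body_weight_kg * error, 2))
    elif error < -10:
        # Well below target: glucagon to raise glucose back up.
        return ("glucagon", round(0.0005 * body_weight_kg * -error, 2))
    return (None, 0.0)

def cycle(glucose_mg_dl, body_weight_kg, minutes_disconnected=0):
    """One loop iteration: a dose, plus the staff alert the trial used."""
    alert = glucose_mg_dl < ALERT_LOW_MG_DL or minutes_disconnected > 15
    return dose_for_reading(glucose_mg_dl, body_weight_kg), alert
```

Note how this mirrors Russell’s point that only body weight is needed to start: everything else is driven by the incoming glucose readings.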

Study participants were adults who had been diagnosed with type 1 diabetes for a year or more and had used an insulin pump to manage their care for at least six months. Each of the 39 participants who finished the study completed two 11-day study periods, one using the bionic pancreas and one using their usual insulin pump and any continuous glucose monitor they had been using. In addition to the automated monitoring of glucose levels and administered doses of insulin or glucagon, participants completed daily surveys regarding any episodes of symptomatic hypoglycemia, carbohydrates consumed to treat those episodes, and any episodes of nausea.

On days when participants were on the bionic pancreas, their average blood glucose levels were significantly lower – 141 mg/dl versus 162 mg/dl – than when on their standard treatment. Blood sugar levels were at levels indicating hypoglycemia (less than 60 mg/dl) for 0.6 percent of the time when participants were on the bionic pancreas, versus 1.9 percent of the time on standard treatment. Participants reported fewer episodes of symptomatic hypoglycemia while on the bionic pancreas, and no episodes of severe hypoglycemia were associated with the system.

The system performed even better during the overnight period, when the risk of hypoglycemia is particularly concerning. “Patients with type 1 diabetes worry about developing hypoglycemia when they are sleeping and tend to let their blood sugar run high at night to reduce that risk,” explains Russell, an assistant professor of Medicine at Harvard Medical School. “Our study showed that the bionic pancreas reduced the risk of overnight hypoglycemia to almost nothing without raising the average glucose level. In fact the improvement in average overnight glucose was greater than the improvement in average glucose over the full 24-hour period.”

Damiano, whose work on this project is inspired by his own 17-year-old son’s type 1 diabetes, adds, “The availability of the bionic pancreas would dramatically change the life of people with diabetes by reducing average glucose levels – thereby reducing the risk of diabetes complications – reducing the risk of hypoglycemia, which is a constant fear of patients and their families, and reducing the emotional burden of managing type 1 diabetes.” A co-author of the Lancet report, Damiano is a professor of Biomedical Engineering at Boston University.

The BU patents covering the bionic pancreas have been licensed to Beta Bionics, a startup company co-founded by Damiano and El-Khatib. The company’s latest version of the bionic pancreas, called the iLet, integrates all components into a single unit, which will be tested in future clinical trials. People interested in participating in upcoming trials may contact Russell’s team at the MGH Diabetes Research Center in care of Llazar Cuko (LCUKO@mgh.harvard.edu).

Here’s a link to and a citation for the paper,

Home use of a bihormonal bionic pancreas versus insulin pump therapy in adults with type 1 diabetes: a multicentre randomised crossover trial by Firas H El-Khatib, Courtney Balliro, Mallory A Hillard, Kendra L Magyar, Laya Ekhlaspour, Manasi Sinha, Debbie Mondesir, Aryan Esmaeili, Celia Hartigan, Michael J Thompson, Samir Malkani, J Paul Lock, David M Harlan, Paula Clinton, Eliana Frank, Darrell M Wilson, Daniel DeSalvo, Lisa Norlander, Trang Ly, Bruce A Buckingham, Jamie Diner, Milana Dezube, Laura A Young, April Goley, M Sue Kirkman, John B Buse, Hui Zheng, Rajendranath R Selagamsetty, Edward R Damiano, Steven J Russell. Lancet DOI: http://dx.doi.org/10.1016/S0140-6736(16)32567-3 Published: 19 December 2016

This paper is behind a paywall.

You can find out more about Beta Bionics and iLet here.

Using acoustic waves to move fluids at the nanoscale

A Nov. 14, 2016 news item on ScienceDaily describes research that could lead to applications useful for ‘lab-on-a-chip’ operations,

A team of mechanical engineers at the University of California San Diego [UCSD] has successfully used acoustic waves to move fluids through small channels at the nanoscale. The breakthrough is a first step toward the manufacturing of small, portable devices that could be used for drug discovery and microrobotics applications. The devices could be integrated in a lab on a chip to sort cells, move liquids, manipulate particles and sense other biological components. For example, it could be used to filter a wide range of particles, such as bacteria, to conduct rapid diagnosis.

A Nov. 14, 2016 UCSD news release (also on EurekAlert), which originated the news item, provides more information,

The researchers detail their findings in the Nov. 14 issue of Advanced Functional Materials. This is the first time that surface acoustic waves have been used at the nanoscale.

The field of nanofluidics has long struggled with moving fluids within channels that are 1000 times smaller than the width of a hair, said James Friend, a professor and materials science expert at the Jacobs School of Engineering at UC San Diego. Current methods require bulky and expensive equipment as well as high temperatures. Moving fluid out of a channel that’s just a few nanometers high requires pressures of 1 megaPascal, or the equivalent of 10 atmospheres.
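The “10 atmospheres” figure quoted above is a rounding of the 1 megapascal value; a quick conversion (assuming the standard atmosphere, 1 atm = 101,325 Pa) confirms it:

```python
# Convert the quoted 1 MPa channel-draining pressure to atmospheres.
PA_PER_ATM = 101325          # standard atmosphere in pascals
pressure_pa = 1_000_000      # 1 megapascal
pressure_atm = pressure_pa / PA_PER_ATM
print(round(pressure_atm, 2))  # 9.87, i.e. roughly the "10 atmospheres" quoted
```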

Researchers led by Friend had tried for several years to use acoustic waves to move fluids along at the nanoscale. They also wanted to do this with a device that could be manufactured at room temperature.

After a year of experimenting, post-doctoral researcher Morteza Miansari, now at Stanford, was able to build a device made of lithium niobate with nanoscale channels where fluids can be moved by surface acoustic waves. This was made possible by a new method Miansari developed to bond the material to itself at room temperature. The fabrication method can be easily scaled up, which would lower manufacturing costs. Building one device would cost $1,000 but building 100,000 would drive the price down to $1 each.

The device is compatible with biological materials, cells and molecules.

Researchers used acoustic waves with a frequency of 20 megaHertz to manipulate fluids, droplets and particles in nanoslits that are 50 to 250 nanometers tall. To fill the channels, researchers applied the acoustic waves in the same direction as the fluid moving into the channels. To drain the channels, the sound waves were applied in the opposite direction.

By changing the height of the channels, the device could be used to filter a wide range of particles, down to large biomolecules such as siRNA, which would not fit in the slits. Essentially, the acoustic waves would drive fluids containing the particles into these channels. But while the fluid would go through, the particles would be left behind and form a dry mass. This could be used for rapid diagnosis in the field.
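The filtering idea above amounts to a size cutoff set by the slit height. A toy sketch of that principle (this is an illustration only, not the paper’s model; the function name and particle sizes are invented):

```python
# Toy model of size-based filtering in a nanoslit: the acoustic waves
# drive the fluid through, but anything at or above the slit height is
# retained at the entrance. Sizes in nanometres. Not the paper's model;
# purely an illustration of the cutoff principle.
def filter_particles(particle_sizes_nm, slit_height_nm):
    """Split a list of particle sizes into (passed, retained)."""
    passed = [p for p in particle_sizes_nm if p < slit_height_nm]
    retained = [p for p in particle_sizes_nm if p >= slit_height_nm]
    return passed, retained
```

With a 50 nm slit, for example, a 10 nm particle rides through with the fluid while a 120 nm bacterium-scale particle is left behind as part of the dry mass the release describes.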

Here’s a link to and a citation for the paper,

Acoustic Nanofluidics via Room-Temperature Lithium Niobate Bonding: A Platform for Actuation and Manipulation of Nanoconfined Fluids and Particles by Morteza Miansari and James R. Friend. Advanced Functional Materials DOI: 10.1002/adfm.201602425 Version of Record online: 20 SEP 2016
© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

They do have an animation sequence illustrating the work but it could be considered suggestive and is, weirdly, silent,

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton. It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view. The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance, “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

Innovation and two Canadian universities

I have two news bits and both concern the Canadian universities, the University of British Columbia (UBC) and the University of Toronto (UofT).

Creative Destruction Lab – West

First, the Creative Destruction Lab, a technology commercialization effort based at UofT’s Rotman School of Management, is opening an office in the west according to a Sept. 28, 2016 UBC media release (received via email; Note: Links have been removed; this long media release interestingly does not mention Joseph Schumpeter, the economist who developed the theory he called creative destruction),

The UBC Sauder School of Business is launching the Western Canadian version of the Creative Destruction Lab, a successful seed-stage program based at UofT’s Rotman School of Management, to help high-technology ventures driven by university research maximize their commercial impact and benefit to society.

“Creative Destruction Lab – West will provide a much-needed support system to ensure innovations formulated on British Columbia campuses can access the funding they need to scale up and grow in-province,” said Robert Helsley, Dean of the UBC Sauder School of Business. “The success our partners at Rotman have had in helping commercialize the scientific breakthroughs of Canadian talent is remarkable and is exactly what we plan to replicate at UBC Sauder.”

Between 2012 and 2016, companies from CDL’s first four years generated over $800 million in equity value. It has supported a long line of emerging startups, including computer-human interface company Thalmic Labs, which announced nearly USD $120 million in funding on September 19, one of the largest Series B financings in Canadian history.

Focusing on massively scalable high-tech startups, CDL-West will provide coaching from world-leading entrepreneurs, support from dedicated business and science faculty, and access to venture capital. While some of the ventures will originate at UBC, CDL-West will also serve the entire province and extended western region by welcoming ventures from other universities. The program will closely align with existing entrepreneurship programs across UBC, including e@UBC and HATCH, and actively work with the BC Tech Association [also known as the BC Technology Industry Association] and other partners to offer a critical next step in the venture creation process.

“We created a model for tech venture creation that keeps startups focused on their essential business challenges and dedicated to solving them with world-class support,” said CDL Founder Ajay Agrawal, a professor at the Rotman School of Management and UBC PhD alumnus.

“By partnering with UBC Sauder, we will magnify the impact of CDL by drawing in ventures from one of the country’s other leading research universities and B.C.’s burgeoning startup scene to further build the country’s tech sector and the opportunities for job creation it provides,” said CDL Director, Rachel Harris.

CDL uses a goal-setting model to push ventures along a path toward success. Over nine months, a collective of leading entrepreneurs with experience building and scaling technology companies – called the G7 – sets targets for ventures to hit every eight weeks, with the goal of maximizing their equity value. Along the way ventures turn to business and technology experts for strategic guidance on how to reach goals, and draw on dedicated UBC Sauder students who apply state-of-the-art business skills to help companies decide which market to enter first and how.

Ventures that fail to achieve milestones – approximately 50 per cent in past cohorts – are cut from the process. Those that reach their objectives and graduate from the program attract investment from the G7, as well as other leading venture-capital firms.

Currently being assembled, the CDL-West G7 will be comprised of entrepreneurial luminaries, including Jeff Mallett, the founding President, COO and Director of Yahoo! Inc. from 1995-2002 – a company he led to $4 billion in revenues and grew from a startup to a publicly traded company whose value reached $135 billion. He is now Managing Director of Iconica Partners and Managing Partner of Mallett Sports & Entertainment, with ventures including the San Francisco Giants, AT&T Park and Mission Rock Development, Comcast Bay Area Sports Network, the San Jose Giants, Major League Soccer, Vancouver Whitecaps FC, and a variety of other sports and online ventures.

Already bearing fruit, the Creative Destruction Lab partnership will see several UBC ventures accepted into a Machine Learning Specialist Track run by Rotman’s CDL this fall. This track is designed to create a support network for enterprises focused on artificial intelligence, a research strength at UofT and Canada more generally, which has traditionally migrated to the United States for funding and commercialization. In its second year, CDL-West will launch its own specialist track in an area of strength at UBC that will draw eastern ventures west.

“This new partnership creates the kind of high impact innovation network the Government of Canada wants to encourage,” said Brandon Lee, Canada’s Consul General in San Francisco, who works to connect Canadian innovation to customers and growth capital opportunities in Silicon Valley. “By collaborating across our universities to enhance our capacity to turn the scientific discoveries into businesses in Canada, we can further advance our nation’s global competitiveness in the knowledge-based industries.”

The Creative Destruction Lab is guided by an Advisory Board, co-chaired by Vancouver-based Haig Farris, a pioneer of the Canadian venture capitalist industry, and Bill Graham, Chancellor of Trinity College at UofT and former Canadian cabinet minister.

“By partnering with Rotman, UBC Sauder will be able to scale up its support for high-tech ventures extremely quickly and with tremendous impact,” said Paul Cubbon, Leader of CDL-West and a faculty member at UBC Sauder. “CDL-West will act as a turbo booster for ventures with great ideas, but which lack the strategic roadmap and funding to make them a reality.”

CDL-West launched its competitive application process for the first round of ventures that will begin in January 2017. Interested ventures are encouraged to submit applications via the CDL website at: www.creativedestructionlab.com

Background

UBC Technology ventures represented at media availability

Awake Labs is a wearable technology startup whose products measure and track anxiety in people with Autism Spectrum Disorder to better understand behaviour. Their first device, Reveal, monitors a wearer’s heart-rate, body temperature and sweat levels using high-tech sensors to provide insight into care and promote long term independence.

Acuva Technologies is a Vancouver-based clean technology venture focused on commercializing breakthrough ultraviolet light-emitting diode technology for water purification systems. Initially focused on point-of-use systems for boats, RVs and off-grid homes in the North American market, where it already has early sales, the company’s goal is to enable water purification in households in developing countries by 2018 and deploy large scale systems by 2021.

Other members of the CDL-West G7 include:

Boris Wertz: One of the top tech early-stage investors in North America and the founding partner of Version One, Wertz is also a board partner with Andreessen Horowitz. Before becoming an investor, Wertz was the Chief Operating Officer of AbeBooks.com, which sold to Amazon in 2008. He was responsible for marketing, business development, product, customer service and international operations. His deep operational experience helps him guide other entrepreneurs to start, build and scale companies.

Lisa Shields: Founder of Hyperwallet Systems Inc., Shields guided Hyperwallet from a technology startup to the leading international payments processor for business to consumer mass payouts. Prior to founding Hyperwallet, Lisa managed payments acceptance and risk management technology teams for high-volume online merchants. She was the founding director of the Wireless Innovation Society of British Columbia and is driven by the social and economic imperatives that shape global payment technologies.

Jeff Booth: Co-founder, President and CEO of Build Direct, a rapidly growing online supplier of home improvement products. Through custom and proprietary web analytics and forecasting tools, BuildDirect is reinventing and redefining how consumers can receive the best prices. BuildDirect has 12 warehouse locations across North America and is headquartered in Vancouver, BC. In 2015, Booth was awarded the BC Technology ‘Person of the Year’ Award by the BC Technology Industry Association.

Education:

CDL-west will provide a transformational experience for MBA and senior undergraduate students at UBC Sauder who will act as venture advisors. Replacing traditional classes, students learn by doing during the process of rapid equity-value creation.

Supporting venture development at UBC:

CDL-west will work closely with venture creation programs across UBC to complete the continuum of support aimed at maximizing venture value and investment. It will draw in ventures that are being or have been supported and developed in programs that span campus, including:

University Industry Liaison Office which works to enable research and innovation partnerships with industry, entrepreneurs, government and non-profit organizations.

e@UBC which provides a combination of mentorship, education, venture creation, and seed funding to support UBC students, alumni, faculty and staff.

HATCH, a UBC technology incubator which leverages the expertise of the UBC Sauder School of Business and entrepreneurship@UBC and a seasoned team of domain-specific experts to provide real-world, hands-on guidance in moving from innovative concept to successful venture.

Coast Capital Savings Innovation Hub, a program based at the UBC Sauder Centre for Social Innovation & Impact Investing, focused on developing ventures with the goal of creating positive social and environmental impact.

About the Creative Destruction Lab in Toronto:

The Creative Destruction Lab leverages the Rotman School’s leading faculty and industry network as well as its location in the heart of Canada’s business capital to accelerate massively scalable, technology-based ventures that have the potential to transform our social, industrial, and economic landscape. The Lab has had a material impact on many nascent startups, including Deep Genomics, Greenlid, Atomwise, Bridgit, Kepler Communications, Nymi, NVBots, OTI Lumionics, PUSH, Thalmic Labs, Vertical.ai, Revlo, Validere, Growsumo, and VoteCompass, among others. For more information, visit www.creativedestructionlab.com

About the UBC Sauder School of Business

The UBC Sauder School of Business is committed to developing transformational and responsible business leaders for British Columbia and the world. Located in Vancouver, Canada’s gateway to the Pacific Rim, the school is distinguished for its long history of partnership and engagement in Asia, the excellence of its graduates, and the impact of its research which ranks in the top 20 globally. For more information, visit www.sauder.ubc.ca

About the Rotman School of Management

The Rotman School of Management is located in the heart of Canada’s commercial and cultural capital and is part of the University of Toronto, one of the world’s top 20 research universities. The Rotman School fosters a new way to think that enables graduates to tackle today’s global business and societal challenges. For more information, visit www.rotman.utoronto.ca.

It’s good to see a couple of successful (according to the news release) local entrepreneurs on the board although I’m somewhat puzzled by Mallett’s presence since, if memory serves, Yahoo! was not doing that well when he left in 2002. The company was an early success but utterly dwarfed by Google at some point in the early 2000s and these days, its stock (both financial and social) has continued to drift downwards. As for Mallett’s current successes, there is no mention of them.

Reuters Top 100 of the world’s most innovative universities

After reading or skimming through the CDL-west news you might think that the University of Toronto ranked higher than UBC on the Reuters list of the world’s most innovative universities. Before breaking the news about the Canadian rankings, here’s more about the list from a Sept. 28, 2016 Reuters news release (received via email),

Stanford University, the Massachusetts Institute of Technology and Harvard University top the second annual Reuters Top 100 ranking of the world’s most innovative universities. The Reuters Top 100 ranking aims to identify the institutions doing the most to advance science, invent new technologies and help drive the global economy. Unlike other rankings that often rely entirely or in part on subjective surveys, the ranking uses proprietary data and analysis tools from the Intellectual Property & Science division of Thomson Reuters to examine a series of patent and research-related metrics, and get to the essence of what it means to be truly innovative.

In the fast-changing world of science and technology, if you’re not innovating, you’re falling behind. That’s one of the key findings of this year’s Reuters 100. The 2016 results show that big breakthroughs – even just one highly influential paper or patent – can drive a university way up the list, but when that discovery fades into the past, so does its ranking. Consistency is key, with truly innovative institutions putting out groundbreaking work year after year.

Stanford held fast to its first place ranking by consistently producing new patents and papers that influence researchers elsewhere in academia and in private industry. Researchers at the Massachusetts Institute of Technology (ranked #2) were behind some of the most important innovations of the past century, including the development of digital computers and the completion of the Human Genome Project. Harvard University (ranked #3) is the oldest institution of higher education in the United States, and has produced 47 Nobel laureates over the course of its 380-year history.

Some universities saw significant movement up the list, including, most notably, the University of Chicago, which jumped from #71 last year to #47 in 2016. Other list-climbers include the Netherlands’ Delft University of Technology (#73 to #44) and South Korea’s Sungkyunkwan University (#66 to #46).

The United States continues to dominate the list, with 46 universities in the top 100; Japan is once again the second best performing country, with nine universities. France and South Korea are tied in third, each with eight. Germany has seven ranked universities; the United Kingdom has five; Switzerland, Belgium and Israel have three; Denmark, China and Canada have two; and the Netherlands and Singapore each have one.

You can find the rankings here (scroll down about 75% of the way) and for the impatient, the University of British Columbia ranked 50th and the University of Toronto 57th.

The biggest surprise for me was that China, like Canada, had two universities on the list. I imagine that will change as China continues its quest for science and innovation dominance. I had one other surprise: given how it touts its innovation prowess, the University of Waterloo is absent.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who has seen those exuberantly speculative film shorts from the 1950s and ’60s knows.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.