Tag Archives: Stanford University

‘Smart’ fabric that’s bony

Researchers at Australia’s University of New South Wales (UNSW) have devised a means of ‘weaving’ a material that mimics the bone tissue periosteum, according to a Jan. 11, 2017 news item on ScienceDaily,

For the first time, UNSW [University of New South Wales] biomedical engineers have woven a ‘smart’ fabric that mimics the sophisticated and complex properties of one of nature’s ingenious materials, the bone tissue periosteum.

Having achieved proof of concept, the researchers are now ready to produce fabric prototypes for a range of advanced functional materials that could transform the medical, safety and transport sectors. Patents for the innovation are pending in Australia, the United States and Europe.

Potential future applications range from protective suits that stiffen under high impact for skiers, racing-car drivers and astronauts, through to ‘intelligent’ compression bandages for deep-vein thrombosis that respond to the wearer’s movement and safer steel-belt radial tyres.

A Jan. 11, 2017 UNSW press release on EurekAlert, which originated the news item, expands on the theme,

Many animal and plant tissues exhibit ‘smart’ and adaptive properties. One such material is the periosteum, a soft tissue sleeve that envelops most bony surfaces in the body. The complex arrangement of collagen, elastin and other structural proteins gives periosteum amazing resilience and provides bones with added strength under high impact loads.

Until now, a lack of scalable ‘bottom-up’ approaches by researchers has stymied their ability to use smart tissues to create advanced functional materials.

UNSW’s Paul Trainor Chair of Biomedical Engineering, Professor Melissa Knothe Tate, said her team had for the first time mapped the complex tissue architectures of the periosteum, visualised them in 3D on a computer, scaled up the key components and produced prototypes using weaving loom technology.

“The result is a series of textile swatch prototypes that mimic periosteum’s smart stress-strain properties. We have also demonstrated the feasibility of using this technique to test other fibres to produce a whole range of new textiles,” Professor Knothe Tate said.

In order to understand the functional capacity of the periosteum, the team used an incredibly high-fidelity imaging system to investigate and map its architecture.

“We then tested the feasibility of rendering periosteum’s natural tissue weaves using computer-aided design software,” Professor Knothe Tate said.

The computer modelling allowed the researchers to scale up nature’s architectural patterns to weave periosteum-inspired, multidimensional fabrics using a state-of-the-art computer-controlled jacquard loom. The loom is known as the original rudimentary computer, first unveiled in 1801.

“The challenge with using collagen and elastin is that their fibres are too small to fit into the loom. So we used elastic material that mimics elastin and silk that mimics collagen,” Professor Knothe Tate said.

In a first test of the scaled-up tissue weaving concept, a series of textile swatch prototypes were woven, using specific combinations of collagen and elastin in a twill pattern designed to mirror periosteum’s weave. Mechanical testing of the swatches showed they exhibited properties similar to those found in periosteum’s natural collagen and elastin weave.

First author and biomedical engineering PhD candidate, Joanna Ng, said the technique had significant implications for the development of next-generation advanced materials and mechanically functional textiles.

While the materials produced by the jacquard loom have potential manufacturing applications – one tyremaker believes a titanium weave could spawn a new generation of thinner, stronger and safer steel-belt radials – the UNSW team is ultimately focused on the machine’s human potential.

“Our longer term goal is to weave biological tissues – essentially human body parts – in the lab to replace and repair our failing joints that reflect the biology, architecture and mechanical properties of the periosteum,” Ms Ng said.

An NHMRC development grant received in November [2016] will allow the team to take its research to the next phase. The researchers will work with the Cleveland Clinic and the University of Sydney’s Professor Tony Weiss to develop and commercialise prototype bone implants for pre-clinical research, using the ‘smart’ technology, within three years.

In searching for more information about this work, I found a Winter 2015 article (PDF; pp. 8-11) by Amy Coopes and Steve Offner for UNSW Magazine about Knothe Tate and her work (Note: In Australia, winter would be what we in the Northern Hemisphere consider summer),

Tucked away in a small room in UNSW’s Graduate School of Biomedical Engineering sits a 19th century–era weaver’s wooden loom. Operated by punch cards and hooks, the machine was the first rudimentary computer when it was unveiled in 1801. While on the surface it looks like a standard Jacquard loom, it has been enhanced with motherboards integrated into each of the loom’s five hook modules and connected to a computer. This state-of-the-art technology means complex algorithms control each of the 5,000 feed-in fibres with incredible precision.

That capacity means the loom can weave with an extraordinary variety of substances, from glass and titanium to rayon and silk, a development that has attracted industry attention around the world.

The interest lies in the natural advantage woven materials have over other manufactured substances. Instead of manipulating material to create new shades or hues as in traditional weaving, the fabrics’ mechanical properties can be modulated, to be stiff at one end, for example, and more flexible at the other.

“Instead of a pattern of colours we get a pattern of mechanical properties,” says Melissa Knothe Tate, UNSW’s Paul Trainor Chair of Biomedical Engineering. “Think of a rope; it’s uniquely good in tension and in bending. Weaving is naturally strong in that way.”


The interface of mechanics and physiology is the focus of Knothe Tate’s work. In March [2015], she travelled to the United States to present another aspect of her work at a meeting of the international Orthopedic Research Society in Las Vegas. That project – which has been dubbed “Google Maps for the body” – explores the interaction between cells and their environment in osteoporosis and other degenerative musculoskeletal conditions such as osteoarthritis.

Using previously top-secret semiconductor technology developed by optics giant Zeiss, and the same approach used by Google Maps to locate users with pinpoint accuracy, Knothe Tate and her team have created “zoomable” anatomical maps from the scale of a human joint down to a single cell.

She has also spearheaded a groundbreaking partnership that includes the Cleveland Clinic, and Brown and Stanford universities to help crunch terabytes of data gathered from human hip studies – all processed with the Google technology. Analysis that once took 25 years can now be done in a matter of weeks, bringing researchers ever closer to a set of laws that govern biological behaviour. [p. 9]
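A quick aside from me on the scale involved in those “zoomable” anatomical maps: going from a whole human joint down to a single cell is a magnification of a few thousand times, which in a Google Maps-style tile pyramid corresponds to roughly a dozen zoom levels. The dimensions in this little sketch are my own illustrative assumptions, not figures from Knothe Tate’s research,

```python
import math

def zoom_levels(full_extent_m, finest_detail_m):
    """Zoom levels needed in a tile pyramid that doubles its
    resolution at each level, as web maps do."""
    magnification = full_extent_m / finest_detail_m
    return math.ceil(math.log2(magnification))

# Illustrative scales: a ~5 cm human joint imaged down to ~10 micrometre cells
levels = zoom_levels(full_extent_m=0.05, finest_detail_m=10e-6)
print(levels)  # 13 doublings -- comparable to the zoom range of a web map
```

Thirteen doublings is in the same ballpark as the roughly twenty zoom levels web maps use for the entire planet, which presumably is part of why the tiling approach transfers so neatly to anatomy.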

I gather she was recruited from the US to work at the University of New South Wales and this article was to highlight why they recruited her and to promote the university’s biomedical engineering department, which she chairs.

Getting back to 2017, here’s a link to and citation for the paper,

Scale-up of nature’s tissue weaving algorithms to engineer advanced functional materials by Joanna L. Ng, Lillian E. Knothe, Renee M. Whan, Ulf Knothe & Melissa L. Knothe Tate. Scientific Reports 7, Article number: 40396 (2017) doi:10.1038/srep40396 Published online: 11 January 2017

This paper is open access.

One final comment: that’s a lot of people (three out of five) with the last name Knothe in the author list for the paper.

Investigating nanoparticles and their environmental impact for industry?

It seems the Center for the Environmental Implications of Nanotechnology (CEINT) at Duke University (North Carolina, US) is making an adjustment to its focus and opening the door to industry as well as government research. It has for some years (my first post about the CEINT at Duke University is an Aug. 15, 2011 post about its mesocosms) been focused on examining the impact of nanoparticles (also called nanomaterials) on plant life and aquatic systems. This Jan. 9, 2017 US National Science Foundation (NSF) news release (h/t Jan. 9, 2017 Nanotechnology Now news item) provides a general description of the work,

We can’t see them, but nanomaterials, both natural and manmade, are literally everywhere, from our personal care products to our building materials – we’re even eating and drinking them.

At the NSF-funded Center for Environmental Implications of Nanotechnology (CEINT), headquartered at Duke University, scientists and engineers are researching how some of these nanoscale materials affect living things. One of CEINT’s main goals is to develop tools that can help assess possible risks to human health and the environment. A key aspect of this research happens in mesocosms, which are outdoor experiments that simulate the natural environment – in this case, wetlands. These simulated wetlands in Duke Forest serve as a testbed for exploring how nanomaterials move through an ecosystem and impact living things.

CEINT is a collaborative effort bringing together researchers from Duke, Carnegie Mellon University, Howard University, Virginia Tech, University of Kentucky, Stanford University, and Baylor University. CEINT academic collaborations include on-going activities coordinated with faculty at Clemson, North Carolina State and North Carolina Central universities, with researchers at the National Institute of Standards and Technology and the Environmental Protection Agency labs, and with key international partners.

The research in this episode was supported by NSF award #1266252, Center for the Environmental Implications of NanoTechnology.

The mention of industry is in this video by O’Brien and Kellan, which describes CEINT’s latest work,

Somewhat similar in approach, although without a direct reference to industry, Canada’s Experimental Lakes Area (ELA) is being used as a test site for silver nanoparticles. Here’s more from the Distilling Science at the Experimental Lakes Area: Nanosilver project page,

Water researchers are interested in nanotechnology, and one of its most commonplace applications: nanosilver. Today these tiny particles with anti-microbial properties are being used in a wide range of consumer products. The problem with nanoparticles is that we don’t fully understand what happens when they are released into the environment.

The research at the IISD-ELA [International Institute for Sustainable Development Experimental Lakes Area] will look at the impacts of nanosilver on ecosystems. What happens when it gets into the food chain? And how does it affect plants and animals?

Here’s a video describing the Nanosilver project at the ELA,

You may have noticed a certain tone to the video and it is due to some political shenanigans, which are described in this Aug. 8, 2016 article by Bartley Kives for the Canadian Broadcasting Corporation’s (CBC) online news.

Bionic pancreas tested at home

This news about a bionic pancreas must be exciting for diabetics as it would eliminate the need for constant blood sugar testing throughout the day. From a Dec. 19, 2016 Massachusetts General Hospital news release (also on EurekAlert), Note: Links have been removed,

The bionic pancreas system developed by Boston University (BU) investigators proved better than either conventional or sensor-augmented insulin pump therapy at managing blood sugar levels in patients with type 1 diabetes living at home, with no restrictions, over 11 days. The report of a clinical trial led by a Massachusetts General Hospital (MGH) physician is receiving advance online publication in The Lancet.

“For study participants living at home without limitations on their activity and diet, the bionic pancreas successfully reduced average blood glucose, while at the same time decreasing the risk of hypoglycemia,” says Steven Russell, MD, PhD, of the MGH Diabetes Unit. “This system requires no information other than the patient’s body weight to start, so it will require much less time and effort by health care providers to initiate treatment. And since no carbohydrate counting is required, it significantly reduces the burden on patients associated with diabetes management.”

Developed by Edward Damiano, PhD, and Firas El-Khatib, PhD, of the BU Department of Biomedical Engineering, the bionic pancreas controls patients’ blood sugar with both insulin and glucagon, a hormone that increases glucose levels. After a 2010 clinical trial confirmed that the original version of the device could maintain near-normal blood sugar levels for more than 24 hours in adult patients, two follow-up trials – reported in a 2014 New England Journal of Medicine paper – showed that an updated version of the system successfully controlled blood sugar levels in adults and adolescents for five days.  Another follow-up trial published in The Lancet Diabetes and Endocrinology in 2016  showed it could do the same for children as young as 6 years of age.

While minimal restrictions were placed on participants in the 2014 trials, participants in both either spent nights in controlled settings accompanied at all times by a nurse (the adult trial) or remained in a diabetes camp (the adolescent and pre-adolescent trials). Participants in the current trial had no such restrictions placed upon them, as they were able to pursue normal activities at home or at work with no imposed limitations on diet or exercise. Patients needed to live within a 30-minute drive of one of the trial sites – MGH, the University of Massachusetts Medical School, Stanford University, and the University of North Carolina at Chapel Hill – and needed to designate a contact person who lived with them and could be contacted by study staff, if necessary.

The bionic pancreas system – the same as that used in the 2014 studies – consisted of a smartphone (iPhone 4S) that could wirelessly communicate with two pumps delivering either insulin or glucagon. Every five minutes the smartphone received a reading from an attached continuous glucose monitor, which was used to calculate and administer a dose of either insulin or glucagon. The algorithms controlling the system were updated for the current trial to better respond to blood sugar variations.

While the device allows participants to enter information about each upcoming meal into a smartphone app, allowing the system to deliver an anticipatory insulin dose, such entries were optional in the current trial. If participants’ blood sugar dropped to dangerous levels or if the monitor or one of the pumps was disconnected for more than 15 minutes, the system would alert study staff, allowing them to check with the participants or their contact persons.
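For readers curious about the shape of such a closed-loop system, here is a deliberately oversimplified sketch of one five-minute control step. The target, gains and proportional rule below are my own inventions for illustration; the trial’s actual dosing algorithms are adaptive, model-based and considerably more sophisticated,

```python
def dosing_decision(glucose_mg_dl, target=110.0,
                    insulin_gain=0.02, glucagon_gain=0.01):
    """Toy closed-loop step: pick a hormone and dose from one CGM reading.

    Returns (hormone, dose) with dose in arbitrary units. A real
    bionic pancreas adapts its dosing over time; this fixed
    proportional rule only illustrates the insulin/glucagon split.
    """
    error = glucose_mg_dl - target
    if error > 0:    # above target: insulin lowers glucose
        return ("insulin", round(error * insulin_gain, 2))
    if error < 0:    # below target: glucagon raises glucose
        return ("glucagon", round(-error * glucagon_gain, 2))
    return ("none", 0.0)

# One simulated reading every five minutes
for reading in (180, 110, 75):
    print(reading, dosing_decision(reading))
```

The point of the sketch is the bihormonal idea itself: unlike an insulin-only pump, the system can actively push glucose back up, which is what drives down the hypoglycemia risk discussed below.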

Study participants were adults who had been diagnosed with type 1 diabetes for a year or more and had used an insulin pump to manage their care for at least six months. Each of the 39 participants who finished the study completed two 11-day study periods, one using the bionic pancreas and one using their usual insulin pump and any continuous glucose monitor they had been using. In addition to the automated monitoring of glucose levels and administered doses of insulin or glucagon, participants completed daily surveys regarding any episodes of symptomatic hypoglycemia, carbohydrates consumed to treat those episodes, and any episodes of nausea.

On days when participants were on the bionic pancreas, their average blood glucose levels were significantly lower – 141 mg/dl versus 162 mg/dl – than when on their standard treatment. Blood sugar levels were at levels indicating hypoglycemia (less than 60 mg/dl) for 0.6 percent of the time when participants were on the bionic pancreas, versus 1.9 percent of the time on standard treatment. Participants reported fewer episodes of symptomatic hypoglycemia while on the bionic pancreas, and no episodes of severe hypoglycemia were associated with the system.
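To put those averages in context, the widely used ADAG regression converts average glucose to an estimated HbA1c via estimated A1c (%) = (average glucose in mg/dl + 46.7) / 28.7. Applying it to the trial’s headline figures is my own back-of-the-envelope exercise, not an analysis from the paper,

```python
def estimated_a1c(avg_glucose_mg_dl):
    """Estimated HbA1c (%) from average glucose via the ADAG regression."""
    return (avg_glucose_mg_dl + 46.7) / 28.7

print(round(estimated_a1c(141), 1))  # 6.5 -- bionic pancreas days
print(round(estimated_a1c(162), 1))  # 7.3 -- standard treatment days

# Time spent hypoglycemic: 0.6% vs 1.9% is roughly a two-thirds relative reduction
print(round((1 - 0.6 / 1.9) * 100))  # 68
```

An estimated A1c near 6.5% would sit comfortably under the general target of below 7% that the American Diabetes Association recommends for many adults, which is what makes the 141 mg/dl average noteworthy.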

The system performed even better during the overnight period, when the risk of hypoglycemia is particularly concerning. “Patients with type 1 diabetes worry about developing hypoglycemia when they are sleeping and tend to let their blood sugar run high at night to reduce that risk,” explains Russell, an assistant professor of Medicine at Harvard Medical School. “Our study showed that the bionic pancreas reduced the risk of overnight hypoglycemia to almost nothing without raising the average glucose level. In fact the improvement in average overnight glucose was greater than the improvement in average glucose over the full 24-hour period.”

Damiano, whose work on this project is inspired by his own 17-year-old son’s type 1 diabetes, adds, “The availability of the bionic pancreas would dramatically change the life of people with diabetes by reducing average glucose levels – thereby reducing the risk of diabetes complications – reducing the risk of hypoglycemia, which is a constant fear of patients and their families, and reducing the emotional burden of managing type 1 diabetes.” A co-author of the Lancet report, Damiano is a professor of Biomedical Engineering at Boston University.

The BU patents covering the bionic pancreas have been licensed to Beta Bionics, a startup company co-founded by Damiano and El-Khatib. The company’s latest version of the bionic pancreas, called the iLet, integrates all components into a single unit, which will be tested in future clinical trials. People interested in participating in upcoming trials may contact Russell’s team at the MGH Diabetes Research Center in care of Llazar Cuko (LCUKO@mgh.harvard.edu ).

Here’s a link to and a citation for the paper,

Home use of a bihormonal bionic pancreas versus insulin pump therapy in adults with type 1 diabetes: a multicentre randomised crossover trial by Firas H El-Khatib, Courtney Balliro, Mallory A Hillard, Kendra L Magyar, Laya Ekhlaspour, Manasi Sinha, Debbie Mondesir, Aryan Esmaeili, Celia Hartigan, Michael J Thompson, Samir Malkani, J Paul Lock, David M Harlan, Paula Clinton, Eliana Frank, Darrell M Wilson, Daniel DeSalvo, Lisa Norlander, Trang Ly, Bruce A Buckingham, Jamie Diner, Milana Dezube, Laura A Young, April Goley, M Sue Kirkman, John B Buse, Hui Zheng, Rajendranath R Selagamsetty, Edward R Damiano, Steven J Russell. Lancet DOI: http://dx.doi.org/10.1016/S0140-6736(16)32567-3  Published: 19 December 2016

This paper is behind a paywall.

You can find out more about Beta Bionics and iLet here.

Using acoustic waves to move fluids at the nanoscale

A Nov. 14, 2016 news item on ScienceDaily describes research that could lead to applications useful for ‘lab-on-a-chip’ operations,

A team of mechanical engineers at the University of California San Diego [UCSD] has successfully used acoustic waves to move fluids through small channels at the nanoscale. The breakthrough is a first step toward the manufacturing of small, portable devices that could be used for drug discovery and microrobotics applications. The devices could be integrated in a lab on a chip to sort cells, move liquids, manipulate particles and sense other biological components. For example, it could be used to filter a wide range of particles, such as bacteria, to conduct rapid diagnosis.

A Nov. 14, 2016 UCSD news release (also on EurekAlert), which originated the news item, provides more information,

The researchers detail their findings in the Nov. 14 issue of Advanced Functional Materials. This is the first time that surface acoustic waves have been used at the nanoscale.

The field of nanofluidics has long struggled with moving fluids within channels that are 1,000 times smaller than the width of a hair, said James Friend, a professor and materials science expert at the Jacobs School of Engineering at UC San Diego. Current methods require bulky and expensive equipment as well as high temperatures. Moving fluid out of a channel that’s just a few nanometers high requires pressures of 1 megapascal, or the equivalent of about 10 atmospheres.
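The release’s figure of 1 megapascal being “the equivalent of 10 atmospheres” is a rounded value that checks out: one standard atmosphere is 101,325 pascals, so 1 MPa is just under ten atmospheres,

```python
ATM_IN_PA = 101_325   # one standard atmosphere, in pascals

pressure_pa = 1e6     # 1 megapascal
atmospheres = pressure_pa / ATM_IN_PA
print(round(atmospheres, 2))  # 9.87 -- i.e. roughly 10 atmospheres
```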

Researchers led by Friend had tried to use acoustic waves to move the fluids along at the nano scale for several years. They also wanted to do this with a device that could be manufactured at room temperature.

After a year of experimenting, post-doctoral researcher Morteza Miansari, now at Stanford, was able to build a device made of lithium niobate with nanoscale channels where fluids can be moved by surface acoustic waves. This was made possible by a new method Miansari developed to bond the material to itself at room temperature.  The fabrication method can be easily scaled up, which would lower manufacturing costs. Building one device would cost $1000 but building 100,000 would drive the price down to $1 each.

The device is compatible with biological materials, cells and molecules.

Researchers used acoustic waves with a frequency of 20 megahertz to manipulate fluids, droplets and particles in nanoslits that are 50 to 250 nanometers tall. To fill the channels, researchers applied the acoustic waves in the same direction as the fluid moving into the channels. To drain the channels, the sound waves were applied in the opposite direction.
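For a sense of the scale mismatch involved: at 20 MHz, a surface acoustic wave on lithium niobate has a wavelength of about 200 micrometres, assuming a typical SAW velocity of roughly 4,000 m/s (the exact speed depends on the crystal cut, which the release doesn’t specify). That is hundreds of times taller than even the largest 250 nm slit, which is part of what makes driving fluid through these channels notable,

```python
saw_velocity_m_s = 4_000.0  # assumed SAW speed on lithium niobate (cut-dependent)
frequency_hz = 20e6         # 20 megahertz, as used in the experiments

wavelength_m = saw_velocity_m_s / frequency_hz
print(wavelength_m * 1e6)     # ~200 micrometres
print(wavelength_m / 250e-9)  # ~800x the tallest (250 nm) nanoslit
```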

By changing the height of the channels, the device could be used to filter a wide range of particles, down to large biomolecules such as siRNA, which would not fit in the slits. Essentially, the acoustic waves would drive fluids containing the particles into these channels. But while the fluid would go through, the particles would be left behind and form a dry mass. This could be used for rapid diagnosis in the field.
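The filtering described there is simple size exclusion: the acoustic waves push the fluid through the slit, and anything taller than the slit stays behind as a dry mass. A toy sketch, with particle names and sizes that are purely illustrative,

```python
def partition_by_slit(particles_nm, slit_height_nm):
    """Split particles into those small enough to pass through a
    nanoslit and those retained at its entrance (size exclusion)."""
    passed = {name: size for name, size in particles_nm.items()
              if size <= slit_height_nm}
    retained = {name: size for name, size in particles_nm.items()
                if size > slit_height_nm}
    return passed, retained

# Approximate sizes in nanometres, for illustration only
sample = {"water": 0.3, "siRNA": 7.5, "virus": 90, "bacterium": 1000}
passed, retained = partition_by_slit(sample, slit_height_nm=250)
print(sorted(passed))    # ['siRNA', 'virus', 'water']
print(sorted(retained))  # ['bacterium']
```

Shrink the slit and progressively smaller species – eventually even large biomolecules like siRNA – are held back, which is the tunability the release describes.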

Here’s a link to and a citation for the paper,

Acoustic Nanofluidics via Room-Temperature Lithium Niobate Bonding: A Platform for Actuation and Manipulation of Nanoconfined Fluids and Particles by Morteza Miansari and James R. Friend. Advanced Functional Materials DOI: 10.1002/adfm.201602425 Version of Record online: 20 SEP 2016
© 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

They do have an animation sequence illustrating the work, but it could be considered suggestive and is, weirdly, silent,


Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network, Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton. It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view. The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016 and as part of the publicity effort the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human. That solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make a more effective robotic tutor, for instance: “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer for the US television series, Person of Interest, a programme based on the concept of a supercomputer with intelligence and personality and the ability to continuously monitor the population 24/7.

Innovation and two Canadian universities

I have two news bits and both concern the Canadian universities, the University of British Columbia (UBC) and the University of Toronto (UofT).

Creative Destruction Lab – West

First, the Creative Destruction Lab, a technology commercialization effort based at UofT’s Rotman School of Management, is opening an office in the west, according to a Sept. 28, 2016 UBC media release (received via email; Note: Links have been removed). Interestingly, this long media release does not mention Joseph Schumpeter, the economist who developed the theory he called ‘creative destruction’,

The UBC Sauder School of Business is launching the Western Canadian version of the Creative Destruction Lab, a successful seed-stage program based at UofT’s Rotman School of Management, to help high-technology ventures driven by university research maximize their commercial impact and benefit to society.

“Creative Destruction Lab – West will provide a much-needed support system to ensure innovations formulated on British Columbia campuses can access the funding they need to scale up and grow in-province,” said Robert Helsley, Dean of the UBC Sauder School of Business. “The success our partners at Rotman have had in helping commercialize the scientific breakthroughs of Canadian talent is remarkable and is exactly what we plan to replicate at UBC Sauder.”

Between 2012 and 2016, companies from CDL’s first four years generated over $800 million in equity value. It has supported a long line of emerging startups, including computer-human interface company Thalmic Labs, which announced nearly USD $120 million in funding on September 19, one of the largest Series B financings in Canadian history.

Focusing on massively scalable high-tech startups, CDL-West will provide coaching from world-leading entrepreneurs, support from dedicated business and science faculty, and access to venture capital. While some of the ventures will originate at UBC, CDL-West will also serve the entire province and extended western region by welcoming ventures from other universities. The program will closely align with existing entrepreneurship programs across UBC, including e@UBC and HATCH, and actively work with the BC Tech Association [also known as the BC Technology Industry Association] and other partners to offer a critical next step in the venture creation process.

“We created a model for tech venture creation that keeps startups focused on their essential business challenges and dedicated to solving them with world-class support,” said CDL Founder Ajay Agrawal, a professor at the Rotman School of Management and UBC PhD alumnus.

“By partnering with UBC Sauder, we will magnify the impact of CDL by drawing in ventures from one of the country’s other leading research universities and B.C.’s burgeoning startup scene to further build the country’s tech sector and the opportunities for job creation it provides,” said CDL Director, Rachel Harris.

CDL uses a goal-setting model to push ventures along a path toward success. Over nine months, a collective of leading entrepreneurs with experience building and scaling technology companies – called the G7 – sets targets for ventures to hit every eight weeks, with the goal of maximizing their equity value. Along the way, ventures turn to business and technology experts for strategic guidance on how to reach goals, and draw on dedicated UBC Sauder students who apply state-of-the-art business skills to help companies decide which market to enter first and how.

Ventures that fail to achieve milestones – approximately 50 per cent in past cohorts – are cut from the process. Those that reach their objectives and graduate from the program attract investment from the G7, as well as other leading venture-capital firms.

Currently being assembled, the CDL-West G7 will comprise entrepreneurial luminaries, including Jeff Mallett, the founding President, COO and Director of Yahoo! Inc. from 1995-2002 – a company he led to $4 billion in revenues and grew from a startup to a publicly traded company whose value reached $135 billion. He is now Managing Director of Iconica Partners and Managing Partner of Mallett Sports & Entertainment, with ventures including the San Francisco Giants, AT&T Park and Mission Rock Development, Comcast Bay Area Sports Network, the San Jose Giants, Major League Soccer, Vancouver Whitecaps FC, and a variety of other sports and online ventures.

Already bearing fruit, the Creative Destruction Lab partnership will see several UBC ventures accepted into a Machine Learning Specialist Track run by Rotman’s CDL this fall. This track is designed to create a support network for enterprises focused on artificial intelligence, a research strength at UofT and Canada more generally, which has traditionally migrated to the United States for funding and commercialization. In its second year, CDL-West will launch its own specialist track in an area of strength at UBC that will draw eastern ventures west.

“This new partnership creates the kind of high-impact innovation network the Government of Canada wants to encourage,” said Brandon Lee, Canada’s Consul General in San Francisco, who works to connect Canadian innovation to customers and growth capital opportunities in Silicon Valley. “By collaborating across our universities to enhance our capacity to turn scientific discoveries into businesses in Canada, we can further advance our nation’s global competitiveness in knowledge-based industries.”

The Creative Destruction Lab is guided by an Advisory Board, co-chaired by Vancouver-based Haig Farris, a pioneer of the Canadian venture capitalist industry, and Bill Graham, Chancellor of Trinity College at UofT and former Canadian cabinet minister.

“By partnering with Rotman, UBC Sauder will be able to scale up its support for high-tech ventures extremely quickly and with tremendous impact,” said Paul Cubbon, Leader of CDL-West and a faculty member at UBC Sauder. “CDL-West will act as a turbo booster for ventures with great ideas, but which lack the strategic roadmap and funding to make them a reality.”

CDL-West launched its competitive application process for the first round of ventures that will begin in January 2017. Interested ventures are encouraged to submit applications via the CDL website at: www.creativedestructionlab.com

Background

UBC Technology ventures represented at media availability

Awake Labs is a wearable technology startup whose products measure and track anxiety in people with Autism Spectrum Disorder to better understand behaviour. Their first device, Reveal, monitors a wearer’s heart rate, body temperature and sweat levels using high-tech sensors to provide insight into care and promote long-term independence.

Acuva Technologies is a Vancouver-based clean technology venture focused on commercializing breakthrough UltraViolet Light Emitting Diode technology for water purification systems. Initially focused on point-of-use systems for boats, RVs and off-grid homes in the North American market, where it already has early sales, the company’s goal is to enable water purification in households in developing countries by 2018 and deploy large-scale systems by 2021.

Other members of the CDL-West G7 include:

Boris Wertz: One of the top tech early-stage investors in North America and the founding partner of Version One, Wertz is also a board partner with Andreessen Horowitz. Before becoming an investor, Wertz was the Chief Operating Officer of AbeBooks.com, which sold to Amazon in 2008. He was responsible for marketing, business development, product, customer service and international operations. His deep operational experience helps him guide other entrepreneurs to start, build and scale companies.

Lisa Shields: Founder of Hyperwallet Systems Inc., Shields guided Hyperwallet from a technology startup to the leading international payments processor for business to consumer mass payouts. Prior to founding Hyperwallet, Lisa managed payments acceptance and risk management technology teams for high-volume online merchants. She was the founding director of the Wireless Innovation Society of British Columbia and is driven by the social and economic imperatives that shape global payment technologies.

Jeff Booth: Co-founder, President and CEO of Build Direct, a rapidly growing online supplier of home improvement products. Through custom and proprietary web analytics and forecasting tools, BuildDirect is reinventing and redefining how consumers can receive the best prices. BuildDirect has 12 warehouse locations across North America and is headquartered in Vancouver, BC. In 2015, Booth was awarded the BC Technology ‘Person of the Year’ Award by the BC Technology Industry Association.

Education:

CDL-west will provide a transformational experience for MBA and senior undergraduate students at UBC Sauder, who will act as venture advisors. In place of traditional classes, students learn by doing through the process of rapid equity-value creation.

Supporting venture development at UBC:

CDL-west will work closely with venture creation programs across UBC to complete the continuum of support aimed at maximizing venture value and investment. It will draw in ventures that are being or have been supported and developed in programs that span campus, including:

University Industry Liaison Office which works to enable research and innovation partnerships with industry, entrepreneurs, government and non-profit organizations.

e@UBC which provides a combination of mentorship, education, venture creation, and seed funding to support UBC students, alumni, faculty and staff.

HATCH, a UBC technology incubator which leverages the expertise of the UBC Sauder School of Business and entrepreneurship@UBC and a seasoned team of domain-specific experts to provide real-world, hands-on guidance in moving from innovative concept to successful venture.

Coast Capital Savings Innovation Hub, a program based at the UBC Sauder Centre for Social Innovation & Impact Investing focused on developing ventures with the goal of creating positive social and environmental impact.

About the Creative Destruction Lab in Toronto:

The Creative Destruction Lab leverages the Rotman School’s leading faculty and industry network as well as its location in the heart of Canada’s business capital to accelerate massively scalable, technology-based ventures that have the potential to transform our social, industrial, and economic landscape. The Lab has had a material impact on many nascent startups, including Deep Genomics, Greenlid, Atomwise, Bridgit, Kepler Communications, Nymi, NVBots, OTI Lumionics, PUSH, Thalmic Labs, Vertical.ai, Revlo, Validere, Growsumo, and VoteCompass, among others. For more information, visit www.creativedestructionlab.com

About the UBC Sauder School of Business

The UBC Sauder School of Business is committed to developing transformational and responsible business leaders for British Columbia and the world. Located in Vancouver, Canada’s gateway to the Pacific Rim, the school is distinguished for its long history of partnership and engagement in Asia, the excellence of its graduates, and the impact of its research which ranks in the top 20 globally. For more information, visit www.sauder.ubc.ca

About the Rotman School of Management

The Rotman School of Management is located in the heart of Canada’s commercial and cultural capital and is part of the University of Toronto, one of the world’s top 20 research universities. The Rotman School fosters a new way to think that enables graduates to tackle today’s global business and societal challenges. For more information, visit www.rotman.utoronto.ca.

It’s good to see a couple of successful (according to the news release) local entrepreneurs on the board, although I’m somewhat puzzled by Mallett’s presence since, if memory serves, Yahoo! was not doing that well when he left in 2002. The company was an early success but was utterly dwarfed by Google in the early 2000s, and these days its stock (both financial and social) has continued to drift downwards. As for Mallett’s current successes, there is no mention of them.

Reuters Top 100 of the world’s most innovative universities

After reading or skimming through the CDL-West news you might think that the University of Toronto ranked higher than UBC on the Reuters list of the world’s most innovative universities. Before breaking the news about the Canadian rankings, here’s more about the list from a Sept. 28, 2016 Reuters news release (received via email),

Stanford University, the Massachusetts Institute of Technology and Harvard University top the second annual Reuters Top 100 ranking of the world’s most innovative universities. The Reuters Top 100 ranking aims to identify the institutions doing the most to advance science, invent new technologies and help drive the global economy. Unlike other rankings that often rely entirely or in part on subjective surveys, the ranking uses proprietary data and analysis tools from the Intellectual Property & Science division of Thomson Reuters to examine a series of patent and research-related metrics, and get to the essence of what it means to be truly innovative.

In the fast-changing world of science and technology, if you’re not innovating, you’re falling behind. That’s one of the key findings of this year’s Reuters 100. The 2016 results show that big breakthroughs – even just one highly influential paper or patent – can drive a university way up the list, but when that discovery fades into the past, so does its ranking. Consistency is key, with truly innovative institutions putting out groundbreaking work year after year.

Stanford held fast to its first place ranking by consistently producing new patents and papers that influence researchers elsewhere in academia and in private industry. Researchers at the Massachusetts Institute of Technology (ranked #2) were behind some of the most important innovations of the past century, including the development of digital computers and the completion of the Human Genome Project. Harvard University (ranked #3), is the oldest institution of higher education in the United States, and has produced 47 Nobel laureates over the course of its 380-year history.

Some universities saw significant movement up the list, including, most notably, the University of Chicago, which jumped from #71 last year to #47 in 2016. Other list-climbers include the Netherlands’ Delft University of Technology (#73 to #44) and South Korea’s Sungkyunkwan University (#66 to #46).

The United States continues to dominate the list, with 46 universities in the top 100; Japan is once again the second best performing country, with nine universities. France and South Korea are tied in third, each with eight. Germany has seven ranked universities; the United Kingdom has five; Switzerland, Belgium and Israel have three; Denmark, China and Canada have two; and the Netherlands and Singapore each have one.

You can find the rankings here (scroll down about 75% of the way) and for the impatient, the University of British Columbia ranked 50th and the University of Toronto 57th.

The biggest surprise for me was that China, like Canada, had only two universities on the list. I imagine that will change as China continues its quest for science and innovation dominance. And given how the University of Waterloo touts its innovation prowess, its absence was one other surprise.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone knows who’s seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Cooling the skin with plastic clothing

Rather than cooling or heating an entire room, why not cool or heat the person? Engineers at Stanford University (California, US) have developed a material that helps with half of that premise: cooling. From a Sept. 1, 2016 news item on ScienceDaily,

Stanford engineers have developed a low-cost, plastic-based textile that, if woven into clothing, could cool your body far more efficiently than is possible with the natural or synthetic fabrics in clothes we wear today.

Describing their work in Science, the researchers suggest that this new family of fabrics could become the basis for garments that keep people cool in hot climates without air conditioning.

“If you can cool the person rather than the building where they work or live, that will save energy,” said Yi Cui, an associate professor of materials science and engineering and of photon science at Stanford.

A Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate, which originated the news item, further explains the information in the video,

This new material works by allowing the body to discharge heat in two ways that would make the wearer feel nearly 4 degrees Fahrenheit cooler than if they wore cotton clothing.

The material cools by letting perspiration evaporate through the material, something ordinary fabrics already do. But the Stanford material provides a second, revolutionary cooling mechanism: allowing heat that the body emits as infrared radiation to pass through the plastic textile.

All objects, including our bodies, throw off heat in the form of infrared radiation, an invisible and benign wavelength of light. Blankets warm us by trapping infrared heat emissions close to the body. This thermal radiation escaping from our bodies is what makes us visible in the dark through night-vision goggles.

“Forty to 60 percent of our body heat is dissipated as infrared radiation when we are sitting in an office,” said Shanhui Fan, a professor of electrical engineering who specializes in photonics, which is the study of visible and invisible light. “But until now there has been little or no research on designing the thermal radiation characteristics of textiles.”
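Fan’s 40-to-60-percent figure is easy to sanity-check with the Stefan-Boltzmann law, which gives the net power a warm surface radiates to cooler surroundings. Here is a minimal back-of-envelope sketch; the skin temperature (33 °C), room temperature (22 °C), surface area (1.7 m²) and emissivity (0.98) are my illustrative assumptions, not figures from the article:

```python
# Rough Stefan-Boltzmann estimate of radiative heat loss from a human body.
# All input values below are illustrative assumptions, not data from the article.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiative_loss_watts(t_skin_k, t_ambient_k, area_m2=1.7, emissivity=0.98):
    """Net power radiated from skin to the surroundings, in watts."""
    return emissivity * SIGMA * area_m2 * (t_skin_k**4 - t_ambient_k**4)

skin = 273.15 + 33.0  # assumed skin temperature, ~33 C
room = 273.15 + 22.0  # assumed office temperature, ~22 C
print(f"{radiative_loss_watts(skin, room):.0f} W")  # roughly 100 W
```

A resting body generates on the order of 100 W of metabolic heat, so even this crude estimate (which overshoots, since real skin is partly clothed and not uniformly at 33 °C) shows why radiation can plausibly carry the 40 to 60 percent of the heat budget that Fan describes.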

Super-powered kitchen wrap

To develop their cooling textile, the Stanford researchers blended nanotechnology, photonics and chemistry to give polyethylene – the clear, clingy plastic we use as kitchen wrap – a number of characteristics desirable in clothing material: It allows thermal radiation, air and water vapor to pass right through, and it is opaque to visible light.

The easiest attribute was allowing infrared radiation to pass through the material, because this is a characteristic of ordinary polyethylene food wrap. Of course, kitchen plastic is impervious to water and is see-through as well, rendering it useless as clothing.

The Stanford researchers tackled these deficiencies one at a time.

First, they found a variant of polyethylene commonly used in battery making that has a specific nanostructure that is opaque to visible light yet is transparent to infrared radiation, which could let body heat escape. This provided a base material that was opaque to visible light for the sake of modesty but thermally transparent for purposes of energy efficiency.

They then modified the industrial polyethylene by treating it with benign chemicals to enable water vapor molecules to evaporate through nanopores in the plastic, said postdoctoral scholar and team member Po-Chun Hsu, allowing the plastic to breathe like a natural fiber.

Making clothes

That success gave the researchers a single-sheet material that met their three basic criteria for a cooling fabric. To make this thin material more fabric-like, they created a three-ply version: two sheets of treated polyethylene separated by a cotton mesh for strength and thickness.

To test the cooling potential of their three-ply construct versus a cotton fabric of comparable thickness, they placed a small swatch of each material on a surface that was as warm as bare skin and measured how much heat each material trapped.

“Wearing anything traps some heat and makes the skin warmer,” Fan said. “If dissipating thermal radiation were our only concern, then it would be best to wear nothing.”

The comparison showed that the cotton fabric made the skin surface 3.6 degrees Fahrenheit (2 degrees Celsius) warmer than their cooling textile. The researchers said this difference means that a person dressed in their new material might feel less inclined to turn on a fan or air conditioner.

The researchers are continuing their work on several fronts, including adding more colors, textures and cloth-like characteristics to their material. Adapting a material already mass produced for the battery industry could make it easier to create products.

“If you want to make a textile, you have to be able to make huge volumes inexpensively,” said study co-author Yi Cui.

Fan believes that this research opens up new avenues of inquiry to cool or heat things, passively, without the use of outside energy, by tuning materials to dissipate or trap infrared radiation.

“In hindsight, some of what we’ve done looks very simple, but it’s because few have really been looking at engineering the radiation characteristics of textiles,” he said.

Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) has written a Sept. 2, 2016 posting where he provides more technical detail about this work,

The nanoPE [nanoporous polyethylene] material is able to achieve this release of the IR heat because of the size of the interconnected pores. The pores can range in size from 50 to 1000 nanometers. They’re therefore comparable in size to wavelengths of visible light, which allows the material to scatter that light. However, because the pores are much smaller than the wavelength of infrared light, the nanoPE is transparent to the IR.

It is this combination of blocking visible light and allowing IR to pass through that distinguishes the nanoPE material from regular polyethylene, which allows similar amounts of IR to pass through, but can only block 20 percent of the visible light compared to nanoPE’s 99 percent opacity.
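The size argument behind that selective transparency can be sketched in a few lines of code. This is an illustrative sketch of the reasoning, not the researchers' model: the band edges and the "more than five times smaller" rule of thumb are assumptions for illustration, standing in for the fact that structures scatter light of comparable wavelength strongly and look uniform to much longer wavelengths.

```python
# Sketch of the size argument behind nanoPE's selective transparency:
# pores scatter light of comparable wavelength strongly, while much
# longer wavelengths pass through as if the material were uniform.
# Band edges and the 5x rule of thumb are illustrative assumptions.
PORE_RANGE_NM = (50, 1_000)      # reported nanoPE pore sizes
VISIBLE_NM = (400, 700)          # visible light wavelengths
BODY_IR_NM = (7_000, 14_000)     # mid-infrared emitted by skin

def overlaps(a, b):
    """True if the two (low, high) ranges intersect."""
    return a[0] <= b[1] and b[0] <= a[1]

# Visible wavelengths fall inside the pore-size range -> strong scattering,
# which is what makes the material white and opaque to the eye.
print("scatters visible light:", overlaps(PORE_RANGE_NM, VISIBLE_NM))

# Body IR is far longer than even the largest pore -> the radiation
# passes through largely unscattered.
print("transparent to body IR:", BODY_IR_NM[0] > 5 * PORE_RANGE_NM[1])
```

Both checks come out true: the pores sit squarely in the visible band but well below infrared wavelengths, which is the whole trick of the material.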

The Stanford researchers were also able to improve on the water wicking capability of the nanoPE material by using a microneedle punching technique and coating the material with a water-repelling agent. The result is that perspiration can evaporate through the material unlike with regular polyethylene.

For those who wish to further pursue their interest, Dexter has a lively writing style and he provides more detail and insight in his posting.

Here’s a link to and a citation for the paper,

Radiative human body cooling by nanoporous polyethylene textile by Po-Chun Hsu, Alex Y. Song, Peter B. Catrysse, Chong Liu, Yucan Peng, Jin Xie, Shanhui Fan, and Yi Cui. Science, 02 Sep 2016: Vol. 353, Issue 6303, pp. 1019-1023. DOI: 10.1126/science.aaf5471

This paper is open access.

Oily nanodiamonds

Nanodiamonds, if successfully extracted from oil, could be used for imaging and communications, and the world’s leading program for extracting nanodiamonds (also known as diamondoids) is in California (US). From a May 12, 2016 news item on Nanowerk,

Stanford and SLAC National Accelerator Laboratory jointly run the world’s leading program for isolating and studying diamondoids — the tiniest possible specks of diamond. Found naturally in petroleum fluids, these interlocking carbon cages weigh less than a billionth of a billionth of a carat (a carat weighs about the same as 12 grains of rice); the smallest ones contain just 10 atoms.

Over the past decade, a team led by two Stanford-SLAC faculty members — Nick Melosh, an associate professor of materials science and engineering and of photon science, and Zhi-Xun Shen, a professor of photon science and of physics and applied physics — has found potential roles for diamondoids in improving electron microscope images, assembling materials and printing circuits on computer chips. The team’s work takes place within SIMES, the Stanford Institute for Materials and Energy Sciences, which is run jointly with SLAC.

Close-up of purified diamondoids on a lab bench. Too small to see with the naked eye, diamondoids are visible only when they clump together in fine, sugar-like crystals like these. Photo: Christopher Smith, SLAC National Accelerator Laboratory

A March 31, 2016 Stanford University news release by Glennda Chui, which originated the news item, describes the work in more detail,

Before they can do that [use nanodiamonds in imaging and other applications], though, just getting the diamondoids is a technical feat. It starts at the nearby Chevron refinery in Richmond, California, with a railroad tank car full of crude oil from the Gulf of Mexico. “We analyzed more than a thousand oils from around the world to see which had the highest concentrations of diamondoids,” says Jeremy Dahl, who developed key diamondoid isolation techniques with fellow Chevron researcher Robert Carlson before both came to Stanford — Dahl as a physical science research associate and Carlson as a visiting scientist.

The original isolation steps were carried out at the Chevron refinery, where the selected crudes were boiled in huge pots to concentrate the diamondoids. Some of the residue from that work came to a SLAC lab, where small batches are repeatedly boiled to evaporate and isolate molecules of specific weights. These fluids are then forced at high pressure through sophisticated filtration systems to separate out diamondoids of different sizes and shapes, each of which has different properties.

The diamondoids themselves are invisible to the eye; the only reason we can see them is that they clump together in fine, sugar-like crystals. “If you had a spoonful,” Dahl says, holding a few in his palm, “you could give 100 billion of them to every person on Earth and still have some left over.”

Recently, the team started using diamondoids to seed the growth of flawless, nano-sized diamonds in a lab at Stanford. By introducing other elements, such as silicon or nickel, during the growing process, they hope to make nanodiamonds with precisely tailored flaws that can produce single photons of light for next-generation optical communications and biological imaging.

Early results show that the quality of optical materials grown from diamondoid seeds is consistently high, says Stanford’s Jelena Vuckovic, a professor of electrical engineering who is leading this part of the research with Steven Chu, professor of physics and of molecular and cellular physiology.

“Developing a reliable way of growing the nanodiamonds is critical,” says Vuckovic, who is also a member of Stanford Bio-X. “And it’s really great to have that source and the grower right here at Stanford. Our collaborators grow the material, we characterize it and we give them feedback right away. They can change whatever we want them to change.”

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages, so I don’t have much more to say about it yet, but along those lines of thought, a recent piece of research on music and personality caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.
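The kind of statistical procedure described above can be illustrated with a toy example. The sketch below is not the authors' actual pipeline; it uses synthetic data and plain principal component analysis to show how ratings on many separate attributes (e.g. "energetic", "aggressive", "melancholy") can collapse onto a handful of composite dimensions like Arousal, Valence, and Depth. The attribute counts, loadings, and noise level are all invented for illustration.

```python
import numpy as np

# Illustrative sketch (not the study's actual method): recovering a few
# composite dimensions from many rated musical attributes, using
# synthetic data. Three hidden dimensions stand in for the clusters the
# researchers labeled Arousal, Valence, and Depth.
rng = np.random.default_rng(0)

n_songs = 200
latent = rng.normal(size=(n_songs, 3))   # hidden per-song dimensions

# Each of 9 observed attribute ratings loads mainly on one latent
# dimension, plus noise -- e.g. "energetic" and "aggressive" would
# both track the Arousal dimension.
loadings = np.zeros((3, 9))
for dim in range(3):
    loadings[dim, dim * 3:(dim + 1) * 3] = 1.0
ratings = latent @ loadings + 0.3 * rng.normal(size=(n_songs, 9))

# Principal components of the standardized ratings reveal how many
# composite dimensions the 9 attributes really span.
standardized = (ratings - ratings.mean(0)) / ratings.std(0)
_, singular_values, _ = np.linalg.svd(standardized, full_matrices=False)
variance = singular_values**2 / (singular_values**2).sum()
print("variance explained per component:", np.round(variance, 2))
```

In this toy setup, the first three components account for nearly all the variance and the rest are negligible, which is the statistical signature behind grouping many adjectives into three clusters.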

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”

The researchers hope that this information will be helpful not only to music therapists but also to health care professionals and even hospitals. For example, recent evidence has shown that music listening can aid recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York, said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality Volume 58, October 2015, Pages 154–158 doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, and Peter J. Rentfrow. PLOS ONE [Public Library of Science ONE], published July 22, 2015. http://dx.doi.org/10.1371/journal.pone.0131151

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine many people are trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allot at least 20 minutes.