Tag Archives: Stanford University

Westworld: a US television programme investigating AI (artificial intelligence) and consciousness

The US television network Home Box Office (HBO) is getting ready to première Westworld, a new series based on a movie first released in 1973. Here’s more about the movie from its Wikipedia entry (Note: Links have been removed),

Westworld is a 1973 science fiction Western thriller film written and directed by novelist Michael Crichton and produced by Paul Lazarus III about amusement park robots that malfunction and begin killing visitors. It stars Yul Brynner as an android in a futuristic Western-themed amusement park, and Richard Benjamin and James Brolin as guests of the park.

Westworld was the first theatrical feature directed by Michael Crichton.[3] It was also the first feature film to use digital image processing, to pixellate photography to simulate an android point of view.[4] The film was nominated for Hugo, Nebula and Saturn awards, and was followed by a sequel film, Futureworld, and a short-lived television series, Beyond Westworld. In August 2013, HBO announced plans for a television series based on the original film.

The latest version is due to start broadcasting in the US on Sunday, Oct. 2, 2016, and as part of the publicity effort, the producers are profiled by Sean Captain for Fast Company in a Sept. 30, 2016 article,

As Game of Thrones marches into its final seasons, HBO is debuting this Sunday what it hopes—and is betting millions of dollars on—will be its new blockbuster series: Westworld, a thorough reimagining of Michael Crichton’s 1973 cult classic film about a Western theme park populated by lifelike robot hosts. A philosophical prelude to Jurassic Park, Crichton’s Westworld is a cautionary tale about technology gone very wrong: the classic tale of robots that rise up and kill the humans. HBO’s new series, starring Evan Rachel Wood, Anthony Hopkins, and Ed Harris, is subtler and also darker: The humans are the scary ones.

“We subverted the entire premise of Westworld in that our sympathies are meant to be with the robots, the hosts,” says series co-creator Lisa Joy. She’s sitting on a couch in her Burbank office next to her partner in life and on the show—writer, director, producer, and husband Jonathan Nolan—who goes by Jonah. …

Their Westworld, which runs in the revered Sunday-night 9 p.m. time slot, combines present-day production values and futuristic technological visions—thoroughly revamping Crichton’s story with hybrid mechanical-biological robots [emphasis mine] fumbling along the blurry line between simulated and actual consciousness.

Captain never does explain the “hybrid mechanical-biological robots.” For example, do they have human skin or other organs grown for use in a robot? In other words, how are they hybrid?

That nitpick aside, the article provides some interesting nuggets of information and insight into the themes and ideas 2016 Westworld’s creators are exploring (Note: A link has been removed),

… Based on the four episodes I previewed (which get progressively more interesting), Westworld does a good job with the trope—which focused especially on the awakening of Dolores, an old soul of a robot played by Evan Rachel Wood. Dolores is also the catchall Spanish word for suffering, pain, grief, and other displeasures. “There are no coincidences in Westworld,” says Joy, noting that the name is also a play on Dolly, the first cloned mammal.

The show operates on a deeper, though hard-to-define level, that runs beneath the shoot-em and screw-em frontier adventure and robotic enlightenment narratives. It’s an allegory of how even today’s artificial intelligence is already taking over, by cataloging and monetizing our lives and identities. “Google and Facebook, their business is reading your mind in order to advertise shit to you,” says Jonah Nolan. …

“Exist free of rules, laws or judgment. No impulse is taboo,” reads a spoof home page for the resort that HBO launched a few weeks ago. That’s lived to the fullest by the park’s utterly sadistic loyal guest, played by Ed Harris and known only as the Man in Black.

The article also features some quotes from scientists on the topic of artificial intelligence (Note: Links have been removed),

“In some sense, being human, but less than human, it’s a good thing,” says Jon Gratch, professor of computer science and psychology at the University of Southern California [USC]. Gratch directs research at the university’s Institute for Creative Technologies on “virtual humans,” AI-driven onscreen avatars used in military-funded training programs. One of the projects, SimSensei, features an avatar of a sympathetic female therapist, Ellie. It uses AI and sensors to interpret facial expressions, posture, tension in the voice, and word choices by users in order to direct a conversation with them.

“One of the things that we’ve found is that people don’t feel like they’re being judged by this character,” says Gratch. In work with a National Guard unit, Ellie elicited more honest responses about their psychological stresses than a web form did, he says. Other data show that people are more honest when they know the avatar is controlled by an AI versus being told that it was controlled remotely by a human mental health clinician.

“If you build it like a human, and it can interact like a human, that solves a lot of the human-computer or human-robot interaction issues,” says professor Paul Rosenbloom, also with USC’s Institute for Creative Technologies. He works on artificial general intelligence, or AGI—the effort to create a human-like or human level of intellect.

Rosenbloom is building an AGI platform called Sigma that models human cognition, including emotions. These could make for a more effective robotic tutor, for instance: “There are times you want the person to know you are unhappy with them, times you want them to know that you think they’re doing great,” he says, where “you” is the AI programmer. “And there’s an emotional component as well as the content.”

Achieving full AGI could take a long time, says Rosenbloom, perhaps a century. Bernie Meyerson, IBM’s chief innovation officer, is also circumspect in predicting if or when Watson could evolve into something like HAL or Her. “Boy, we are so far from that reality, or even that possibility, that it becomes ludicrous trying to get hung up there, when we’re trying to get something to reasonably deal with fact-based data,” he says.

Gratch, Rosenbloom, and Meyerson are talking about screen-based entities and concepts of consciousness and emotions. Then, there’s a scientist who’s talking about the difficulties with robots,

… Ken Goldberg, an artist and professor of engineering at UC [University of California] Berkeley, calls the notion of cyborg robots in Westworld “a pretty common trope in science fiction.” (Joy will take up the theme again, as the screenwriter for a new Battlestar Galactica movie.) Goldberg’s lab is struggling just to build and program a robotic hand that can reliably pick things up. But a sympathetic, somewhat believable Dolores in a virtual setting is not so farfetched.

Captain delves further into a thorny issue,

“Can simulations, at some point, become the real thing?” asks Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University. “If we perfectly simulate a rainstorm on a computer, it’s still not a rainstorm. We won’t get wet. But is the mind or consciousness different? The jury is still out.”

While artificial consciousness is still in the dreamy phase, today’s level of AI is serious business. “What was sort of a highfalutin philosophical question a few years ago has become an urgent industrial need,” says Jonah Nolan. It’s not clear yet how the Delos management intends, beyond entrance fees, to monetize Westworld, although you get a hint when Ford tells Theresa Cullen “We know everything about our guests, don’t we? As we know everything about our employees.”

AI has a clear moneymaking model in this world, according to Nolan. “Facebook is monetizing your social graph, and Google is advertising to you.” Both companies (and others) are investing in AI to better understand users and find ways to make money off this knowledge. …

As my colleague David Bruggeman has often noted on his Pasco Phronesis blog, there’s a lot of science on television.

For anyone who’s interested in artificial intelligence and the effects it may have on urban life, see my Sept. 27, 2016 posting featuring the ‘One Hundred Year Study on Artificial Intelligence (AI100)’, hosted by Stanford University.

Points to anyone who recognized Jonah (Jonathan) Nolan as the producer of the US television series Person of Interest, a programme based on the concept of a supercomputer with intelligence, personality, and the ability to monitor the population 24/7.

Innovation and two Canadian universities

I have two news bits, both concerning Canadian universities: the University of British Columbia (UBC) and the University of Toronto (UofT).

Creative Destruction Lab – West

First, the Creative Destruction Lab, a technology commercialization effort based at UofT’s Rotman School of Management, is opening an office in the west, according to a Sept. 28, 2016 UBC media release (received via email; Note: Links have been removed; interestingly, this long media release does not mention Joseph Schumpeter, the economist who developed the theory he called “creative destruction”),

The UBC Sauder School of Business is launching the Western Canadian version of the Creative Destruction Lab, a successful seed-stage program based at UofT’s Rotman School of Management, to help high-technology ventures driven by university research maximize their commercial impact and benefit to society.

“Creative Destruction Lab – West will provide a much-needed support system to ensure innovations formulated on British Columbia campuses can access the funding they need to scale up and grow in-province,” said Robert Helsley, Dean of the UBC Sauder School of Business. “The success our partners at Rotman have had in helping commercialize the scientific breakthroughs of Canadian talent is remarkable and is exactly what we plan to replicate at UBC Sauder.”

Between 2012 and 2016, companies from CDL’s first four years generated over $800 million in equity value. It has supported a long line of emerging startups, including computer-human interface company Thalmic Labs, which announced nearly US$120 million in funding on September 19, one of the largest Series B financings in Canadian history.

Focusing on massively scalable high-tech startups, CDL-West will provide coaching from world-leading entrepreneurs, support from dedicated business and science faculty, and access to venture capital. While some of the ventures will originate at UBC, CDL-West will also serve the entire province and extended western region by welcoming ventures from other universities. The program will closely align with existing entrepreneurship programs across UBC, including e@UBC and HATCH, and actively work with the BC Tech Association [also known as the BC Technology Industry Association] and other partners to offer a critical next step in the venture creation process.

“We created a model for tech venture creation that keeps startups focused on their essential business challenges and dedicated to solving them with world-class support,” said CDL Founder Ajay Agrawal, a professor at the Rotman School of Management and UBC PhD alumnus.

“By partnering with UBC Sauder, we will magnify the impact of CDL by drawing in ventures from one of the country’s other leading research universities and B.C.’s burgeoning startup scene to further build the country’s tech sector and the opportunities for job creation it provides,” said CDL Director, Rachel Harris.

CDL uses a goal-setting model to push ventures along a path toward success. Over nine months, a collective of leading entrepreneurs with experience building and scaling technology companies – called the G7 – sets targets for ventures to hit every eight weeks, with the goal of maximizing their equity value. Along the way, ventures turn to business and technology experts for strategic guidance on how to reach goals, and draw on dedicated UBC Sauder students who apply state-of-the-art business skills to help companies decide which market to enter first and how.

Ventures that fail to achieve milestones – approximately 50 per cent in past cohorts – are cut from the process. Those that reach their objectives and graduate from the program attract investment from the G7, as well as other leading venture-capital firms.

Currently being assembled, the CDL-West G7 will comprise entrepreneurial luminaries, including Jeff Mallett, the founding President, COO and Director of Yahoo! Inc. from 1995 to 2002 – a company he led to $4 billion in revenues and grew from a startup to a publicly traded company whose value reached $135 billion. He is now Managing Director of Iconica Partners and Managing Partner of Mallett Sports & Entertainment, with ventures including the San Francisco Giants, AT&T Park and Mission Rock Development, Comcast Bay Area Sports Network, the San Jose Giants, Major League Soccer, Vancouver Whitecaps FC, and a variety of other sports and online ventures.

Already bearing fruit, the Creative Destruction Lab partnership will see several UBC ventures accepted into a Machine Learning Specialist Track run by Rotman’s CDL this fall. This track is designed to create a support network for enterprises focused on artificial intelligence, a research strength at UofT and Canada more generally, which has traditionally migrated to the United States for funding and commercialization. In its second year, CDL-West will launch its own specialist track in an area of strength at UBC that will draw eastern ventures west.

“This new partnership creates the kind of high impact innovation network the Government of Canada wants to encourage,” said Brandon Lee, Canada’s Consul General in San Francisco, who works to connect Canadian innovation to customers and growth capital opportunities in Silicon Valley. “By collaborating across our universities to enhance our capacity to turn the scientific discoveries into businesses in Canada, we can further advance our nation’s global competitiveness in the knowledge-based industries.”

The Creative Destruction Lab is guided by an Advisory Board, co-chaired by Vancouver-based Haig Farris, a pioneer of the Canadian venture capitalist industry, and Bill Graham, Chancellor of Trinity College at UofT and former Canadian cabinet minister.

“By partnering with Rotman, UBC Sauder will be able to scale up its support for high-tech ventures extremely quickly and with tremendous impact,” said Paul Cubbon, Leader of CDL-West and a faculty member at UBC Sauder. “CDL-West will act as a turbo booster for ventures with great ideas, but which lack the strategic roadmap and funding to make them a reality.”

CDL-West launched its competitive application process for the first round of ventures that will begin in January 2017. Interested ventures are encouraged to submit applications via the CDL website at: www.creativedestructionlab.com


UBC Technology ventures represented at media availability

Awake Labs is a wearable technology startup whose products measure and track anxiety in people with Autism Spectrum Disorder to better understand behaviour. Their first device, Reveal, monitors a wearer’s heart rate, body temperature and sweat levels using high-tech sensors to provide insight into care and promote long-term independence.

Acuva Technologies is a Vancouver-based clean technology venture focused on commercializing breakthrough ultraviolet light-emitting diode (UV LED) technology for water purification systems. Initially focused on point-of-use systems for boats, RVs and off-grid homes in the North American market, where it already has early sales, the company’s goal is to enable water purification in households in developing countries by 2018 and deploy large-scale systems by 2021.

Other members of the CDL-West G7 include:

Boris Wertz: One of the top tech early-stage investors in North America and the founding partner of Version One, Wertz is also a board partner with Andreessen Horowitz. Before becoming an investor, Wertz was the Chief Operating Officer of AbeBooks.com, which sold to Amazon in 2008. He was responsible for marketing, business development, product, customer service and international operations. His deep operational experience helps him guide other entrepreneurs to start, build and scale companies.

Lisa Shields: Founder of Hyperwallet Systems Inc., Shields guided Hyperwallet from a technology startup to the leading international payments processor for business to consumer mass payouts. Prior to founding Hyperwallet, Lisa managed payments acceptance and risk management technology teams for high-volume online merchants. She was the founding director of the Wireless Innovation Society of British Columbia and is driven by the social and economic imperatives that shape global payment technologies.

Jeff Booth: Co-founder, President and CEO of Build Direct, a rapidly growing online supplier of home improvement products. Through custom and proprietary web analytics and forecasting tools, BuildDirect is reinventing and redefining how consumers can receive the best prices. BuildDirect has 12 warehouse locations across North America and is headquartered in Vancouver, BC. In 2015, Booth was awarded the BC Technology ‘Person of the Year’ Award by the BC Technology Industry Association.


CDL-west will provide a transformational experience for MBA and senior undergraduate students at UBC Sauder who will act as venture advisors. Replacing traditional classes, students learn by doing during the process of rapid equity-value creation.

Supporting venture development at UBC:

CDL-west will work closely with venture creation programs across UBC to complete the continuum of support aimed at maximizing venture value and investment. It will draw in ventures that are being or have been supported and developed in programs that span campus, including:

University Industry Liaison Office which works to enable research and innovation partnerships with industry, entrepreneurs, government and non-profit organizations.

e@UBC which provides a combination of mentorship, education, venture creation, and seed funding to support UBC students, alumni, faculty and staff.

HATCH, a UBC technology incubator which leverages the expertise of the UBC Sauder School of Business and entrepreneurship@UBC and a seasoned team of domain-specific experts to provide real-world, hands-on guidance in moving from innovative concept to successful venture.

Coast Capital Savings Innovation Hub, a program based at the UBC Sauder Centre for Social Innovation & Impact Investing focused on developing ventures with the goal of creating positive social and environmental impact.

About the Creative Destruction Lab in Toronto:

The Creative Destruction Lab leverages the Rotman School’s leading faculty and industry network as well as its location in the heart of Canada’s business capital to accelerate massively scalable, technology-based ventures that have the potential to transform our social, industrial, and economic landscape. The Lab has had a material impact on many nascent startups, including Deep Genomics, Greenlid, Atomwise, Bridgit, Kepler Communications, Nymi, NVBots, OTI Lumionics, PUSH, Thalmic Labs, Vertical.ai, Revlo, Validere, Growsumo, and VoteCompass, among others. For more information, visit www.creativedestructionlab.com

About the UBC Sauder School of Business

The UBC Sauder School of Business is committed to developing transformational and responsible business leaders for British Columbia and the world. Located in Vancouver, Canada’s gateway to the Pacific Rim, the school is distinguished for its long history of partnership and engagement in Asia, the excellence of its graduates, and the impact of its research which ranks in the top 20 globally. For more information, visit www.sauder.ubc.ca

About the Rotman School of Management

The Rotman School of Management is located in the heart of Canada’s commercial and cultural capital and is part of the University of Toronto, one of the world’s top 20 research universities. The Rotman School fosters a new way to think that enables graduates to tackle today’s global business and societal challenges. For more information, visit www.rotman.utoronto.ca.

It’s good to see a couple of successful (according to the news release) local entrepreneurs on the board, although I’m somewhat puzzled by Mallett’s presence since, if memory serves, Yahoo! was not doing that well when he left in 2002. The company was an early success but was utterly dwarfed by Google in the early 2000s, and these days its stock (both financial and social) has continued to drift downwards. As for Mallett’s current successes, there is no mention of them.

Reuters Top 100 of the world’s most innovative universities

After reading or skimming through the CDL-West news, you might think that the University of Toronto ranked higher than UBC on the Reuters list of the world’s most innovative universities. Before breaking the news about the Canadian rankings, here’s more about the list from a Sept. 28, 2016 Reuters news release (received via email),

Stanford University, the Massachusetts Institute of Technology and Harvard University top the second annual Reuters Top 100 ranking of the world’s most innovative universities. The Reuters Top 100 ranking aims to identify the institutions doing the most to advance science, invent new technologies and help drive the global economy. Unlike other rankings that often rely entirely or in part on subjective surveys, the ranking uses proprietary data and analysis tools from the Intellectual Property & Science division of Thomson Reuters to examine a series of patent and research-related metrics, and get to the essence of what it means to be truly innovative.

In the fast-changing world of science and technology, if you’re not innovating, you’re falling behind. That’s one of the key findings of this year’s Reuters 100. The 2016 results show that big breakthroughs – even just one highly influential paper or patent – can drive a university way up the list, but when that discovery fades into the past, so does its ranking. Consistency is key, with truly innovative institutions putting out groundbreaking work year after year.

Stanford held fast to its first place ranking by consistently producing new patents and papers that influence researchers elsewhere in academia and in private industry. Researchers at the Massachusetts Institute of Technology (ranked #2) were behind some of the most important innovations of the past century, including the development of digital computers and the completion of the Human Genome Project. Harvard University (ranked #3) is the oldest institution of higher education in the United States and has produced 47 Nobel laureates over the course of its 380-year history.

Some universities saw significant movement up the list, including, most notably, the University of Chicago, which jumped from #71 last year to #47 in 2016. Other list-climbers include the Netherlands’ Delft University of Technology (#73 to #44) and South Korea’s Sungkyunkwan University (#66 to #46).

The United States continues to dominate the list, with 46 universities in the top 100; Japan is once again the second best performing country, with nine universities. France and South Korea are tied in third, each with eight. Germany has seven ranked universities; the United Kingdom has five; Switzerland, Belgium and Israel have three; Denmark, China and Canada have two; and the Netherlands and Singapore each have one.

You can find the rankings here (scroll down about 75% of the way) and for the impatient, the University of British Columbia ranked 50th and the University of Toronto 57th.

The biggest surprise for me was that China, like Canada, had only two universities on the list. I imagine that will change as China continues its quest for dominance in science and innovation. I had one other surprise: given how it touts its innovation prowess, the University of Waterloo is absent from the list.

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone knows who has seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;

  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Cooling the skin with plastic clothing

Rather than cooling or heating an entire room, why not cool or heat the person? Engineers at Stanford University (California, US) have developed a material that helps with half of that premise: cooling. From a Sept. 1, 2016 news item on ScienceDaily,

Stanford engineers have developed a low-cost, plastic-based textile that, if woven into clothing, could cool your body far more efficiently than is possible with the natural or synthetic fabrics in clothes we wear today.

Describing their work in Science, the researchers suggest that this new family of fabrics could become the basis for garments that keep people cool in hot climates without air conditioning.

“If you can cool the person rather than the building where they work or live, that will save energy,” said Yi Cui, an associate professor of materials science and engineering and of photon science at Stanford.

A Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate, which originated the news item, further explains the information in the video,

This new material works by allowing the body to discharge heat in two ways that would make the wearer feel nearly 4 degrees Fahrenheit cooler than if they wore cotton clothing.

The material cools by letting perspiration evaporate through the material, something ordinary fabrics already do. But the Stanford material provides a second, revolutionary cooling mechanism: allowing heat that the body emits as infrared radiation to pass through the plastic textile.

All objects, including our bodies, throw off heat in the form of infrared radiation, an invisible and benign wavelength of light. Blankets warm us by trapping infrared heat emissions close to the body. This thermal radiation escaping from our bodies is what makes us visible in the dark through night-vision goggles.

“Forty to 60 percent of our body heat is dissipated as infrared radiation when we are sitting in an office,” said Shanhui Fan, a professor of electrical engineering who specializes in photonics, which is the study of visible and invisible light. “But until now there has been little or no research on designing the thermal radiation characteristics of textiles.”
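Fan’s figure can be put in rough perspective with the Stefan-Boltzmann law. The emissivity, radiating area, and temperatures in this sketch are my own illustrative assumptions, not values from the study:

```python
# Rough estimate of the body's net radiative heat loss in an office,
# via the Stefan-Boltzmann law. All parameter values are illustrative
# assumptions (emissivity, area, temperatures), not from the paper.
SIGMA = 5.67e-8   # W/(m^2 K^4), Stefan-Boltzmann constant

def radiative_loss_w(emissivity, area_m2, t_skin_k, t_room_k):
    """Net thermal radiation from skin to surroundings, in watts."""
    return emissivity * SIGMA * area_m2 * (t_skin_k**4 - t_room_k**4)

# ~0.95 emissivity, ~1.5 m^2 effective radiating area,
# 31 C skin against a 22 C room
p = radiative_loss_w(0.95, 1.5, 304.0, 295.0)
print(f"net radiative loss: {p:.0f} W")
```

A seated adult produces on the order of 100 watts of metabolic heat, so a net radiative loss of several tens of watts is broadly consistent with the 40 to 60 percent range Fan cites.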

Super-powered kitchen wrap

To develop their cooling textile, the Stanford researchers blended nanotechnology, photonics and chemistry to give polyethylene – the clear, clingy plastic we use as kitchen wrap – a number of characteristics desirable in clothing material: It allows thermal radiation, air and water vapor to pass right through, and it is opaque to visible light.

The easiest attribute was allowing infrared radiation to pass through the material, because this is a characteristic of ordinary polyethylene food wrap. Of course, kitchen plastic is impervious to water and is see-through as well, rendering it useless as clothing.

The Stanford researchers tackled these deficiencies one at a time.

First, they found a variant of polyethylene commonly used in battery making that has a specific nanostructure that is opaque to visible light yet is transparent to infrared radiation, which could let body heat escape. This provided a base material that was opaque to visible light for the sake of modesty but thermally transparent for purposes of energy efficiency.

They then modified the industrial polyethylene by treating it with benign chemicals to enable water vapor molecules to evaporate through nanopores in the plastic, said postdoctoral scholar and team member Po-Chun Hsu, allowing the plastic to breathe like a natural fiber.

Making clothes

That success gave the researchers a single-sheet material that met their three basic criteria for a cooling fabric. To make this thin material more fabric-like, they created a three-ply version: two sheets of treated polyethylene separated by a cotton mesh for strength and thickness.

To test the cooling potential of their three-ply construct versus a cotton fabric of comparable thickness, they placed a small swatch of each material on a surface that was as warm as bare skin and measured how much heat each material trapped.

“Wearing anything traps some heat and makes the skin warmer,” Fan said. “If dissipating thermal radiation were our only concern, then it would be best to wear nothing.”

The comparison showed that the cotton fabric made the skin surface 3.6 F warmer than their cooling textile. The researchers said this difference means that a person dressed in their new material might feel less inclined to turn on a fan or air conditioner.

The researchers are continuing their work on several fronts, including adding more colors, textures and cloth-like characteristics to their material. Adapting a material already mass produced for the battery industry could make it easier to create products.

“If you want to make a textile, you have to be able to make huge volumes inexpensively,” Cui said.

Fan believes that this research opens up new avenues of inquiry to cool or heat things, passively, without the use of outside energy, by tuning materials to dissipate or trap infrared radiation.

“In hindsight, some of what we’ve done looks very simple, but it’s because few have really been looking at engineering the radiation characteristics of textiles,” he said.

Dexter Johnson (Nanoclast blog on the IEEE [Institute of Electrical and Electronics Engineers] website) has written a Sept. 2, 2016 posting where he provides more technical detail about this work,

The nanoPE [nanoporous polyethylene] material is able to achieve this release of the IR heat because of the size of the interconnected pores. The pores can range in size from 50 to 1000 nanometers. They’re therefore comparable in size to wavelengths of visible light, which allows the material to scatter that light. However, because the pores are much smaller than the wavelength of infrared light, the nanoPE is transparent to the IR.

It is this combination of blocking visible light and allowing IR to pass through that distinguishes the nanoPE material from regular polyethylene, which allows similar amounts of IR to pass through, but can only block 20 percent of the visible light compared to nanoPE’s 99 percent opacity.

The Stanford researchers were also able to improve on the water wicking capability of the nanoPE material by using a microneedle punching technique and coating the material with a water-repelling agent. The result is that perspiration can evaporate through the material unlike with regular polyethylene.

For those who wish to further pursue their interest, Dexter has a lively writing style and he provides more detail and insight in his posting.
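The wavelength argument in Dexter’s explanation can also be sanity-checked with Wien’s displacement law, which puts the body’s thermal emission peak near 9,000 nm. The band limits below are approximate figures I’ve chosen for illustration:

```python
# Back-of-the-envelope check of the scattering argument, using Wien's
# displacement law for the peak of the body's thermal emission.
WIEN_B = 2.898e-3            # m*K, Wien's displacement constant
body_t_k = 310.0             # K, approximate body temperature

ir_peak_nm = WIEN_B / body_t_k * 1e9    # thermal IR peak wavelength
visible_nm = (400.0, 700.0)             # approximate visible band
pore_nm = (50.0, 1000.0)                # nanoPE pore-size range

# The pores overlap the visible band, so they scatter visible light strongly.
scatters_visible = pore_nm[0] < visible_nm[1] and pore_nm[1] > visible_nm[0]
# Even the largest pores sit far below the IR peak, so IR passes through.
ir_transparent = pore_nm[1] < ir_peak_nm / 5

print(f"thermal IR emission peaks near {ir_peak_nm:.0f} nm")
print("pores scatter visible light:", scatters_visible)
print("pores transparent to thermal IR:", ir_transparent)
```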

Here’s a link to and a citation for the paper,

Radiative human body cooling by nanoporous polyethylene textile by Po-Chun Hsu, Alex Y. Song, Peter B. Catrysse, Chong Liu, Yucan Peng, Jin Xie, Shanhui Fan, Yi Cui. Science  02 Sep 2016: Vol. 353, Issue 6303, pp. 1019-1023 DOI: 10.1126/science.aaf5471

This paper is open access.

Oily nanodiamonds

Nanodiamonds, if successfully extracted from oil, could be used for imaging and communications, and the world’s leading program for extracting nanodiamonds (also known as diamondoids) is in California (US). From a May 12, 2016 news item on Nanowerk,

Stanford and SLAC National Accelerator Laboratory jointly run the world’s leading program for isolating and studying diamondoids — the tiniest possible specks of diamond. Found naturally in petroleum fluids, these interlocking carbon cages weigh less than a billionth of a billionth of a carat (a carat weighs about the same as 12 grains of rice); the smallest ones contain just 10 atoms.

Over the past decade, a team led by two Stanford-SLAC faculty members — Nick Melosh, an associate professor of materials science and engineering and of photon science, and Zhi-Xun Shen, a professor of photon science and of physics and applied physics – has found potential roles for diamondoids in improving electron microscope images, assembling materials and printing circuits on computer chips. The team’s work takes place within SIMES, the Stanford Institute for Materials and Energy Sciences, which is run jointly with SLAC.

Close-up of purified diamondoids on a lab bench. Too small to see with the naked eye, diamondoids are visible only when they clump together in fine, sugar-like crystals like these. Photo: Christopher Smith, SLAC National Accelerator Laboratory


A March 31, 2016 Stanford University news release by Glennda Chui, which originated the news item, describes the work in more detail,

Before they can do that [use nanodiamonds in imaging and other applications], though, just getting the diamondoids is a technical feat. It starts at the nearby Chevron refinery in Richmond, California, with a railroad tank car full of crude oil from the Gulf of Mexico. “We analyzed more than a thousand oils from around the world to see which had the highest concentrations of diamondoids,” says Jeremy Dahl, who developed key diamondoid isolation techniques with fellow Chevron researcher Robert Carlson before both came to Stanford — Dahl as a physical science research associate and Carlson as a visiting scientist.

The original isolation steps were carried out at the Chevron refinery, where the selected crudes were boiled in huge pots to concentrate the diamondoids. Some of the residue from that work came to a SLAC lab, where small batches are repeatedly boiled to evaporate and isolate molecules of specific weights. These fluids are then forced at high pressure through sophisticated filtration systems to separate out diamondoids of different sizes and shapes, each of which has different properties.

The diamondoids themselves are invisible to the eye; the only reason we can see them is that they clump together in fine, sugar-like crystals. “If you had a spoonful,” Dahl says, holding a few in his palm, “you could give 100 billion of them to every person on Earth and still have some left over.”
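Dahl’s arithmetic roughly checks out if one assumes the smallest diamondoid, adamantane (C10H16), and a few grams per spoonful; both assumptions are mine, not his:

```python
# Back-of-the-envelope check of Dahl's "spoonful" claim, assuming the
# smallest diamondoid, adamantane (C10H16). The spoonful mass is a guess.
AVOGADRO = 6.022e23                                # molecules per mole
adamantane_g_per_mol = 10 * 12.011 + 16 * 1.008    # ~136.2 g/mol

spoon_g = 4.0                                      # assumed spoonful mass, grams
molecules = spoon_g / adamantane_g_per_mol * AVOGADRO

population = 7.4e9                                 # world population, ~2016
needed = population * 100e9                        # 100 billion for each person

print(f"molecules in a spoonful: {molecules:.2e}")
print(f"needed for everyone:     {needed:.2e}")
print("some left over:", molecules > needed)
```

A spoonful comes out around 10^22 molecules, more than an order of magnitude beyond the roughly 7 x 10^20 needed, so the claim is plausible.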

Recently, the team started using diamondoids to seed the growth of flawless, nano-sized diamonds in a lab at Stanford. By introducing other elements, such as silicon or nickel, during the growing process, they hope to make nanodiamonds with precisely tailored flaws that can produce single photons of light for next-generation optical communications and biological imaging.

Early results show that the quality of optical materials grown from diamondoid seeds is consistently high, says Stanford’s Jelena Vuckovic, a professor of electrical engineering who is leading this part of the research with Steven Chu, professor of physics and of molecular and cellular physiology.

“Developing a reliable way of growing the nanodiamonds is critical,” says Vuckovic, who is also a member of Stanford Bio-X. “And it’s really great to have that source and the grower right here at Stanford. Our collaborators grow the material, we characterize it and we give them feedback right away. They can change whatever we want them to change.”

The song is you: a McGill University, University of Cambridge, and Stanford University research collaboration

These days I’m thinking about sound, music, spoken word, and more as I prepare for a new art/science piece. It’s very early stages so I don’t have much more to say about it but along those lines of thought, there’s a recent piece of research on music and personality that caught my eye. From a May 11, 2016 news item on phys.org,

A team of scientists from McGill University, the University of Cambridge, and Stanford Graduate School of Business developed a new method of coding and categorizing music. They found that people’s preference for these musical categories is driven by personality. The researchers say the findings have important implications for industry and health professionals.

A May 10, 2016 McGill University news release, which originated the news item, provides some fascinating suggestions for new categories for music,

There are a multitude of adjectives that people use to describe music, but in a recent study to be published this week in the journal Social Psychological and Personality Science, researchers show that musical attributes can be grouped into three categories. Rather than relying on the genre or style of a song, the team of scientists led by music psychologist David Greenberg with the help of Daniel J. Levitin from McGill University mapped the musical attributes of song excerpts from 26 different genres and subgenres, and then applied a statistical procedure to group them into clusters. The study revealed three clusters, which they labeled Arousal, Valence, and Depth. Arousal describes intensity and energy in music; Valence describes the spectrum of emotions in music (from sad to happy); and Depth describes intellect and sophistication in music. They also found that characteristics describing music from a single genre (both rock and jazz separately) could be grouped in these same three categories.

The findings suggest that this may be a useful alternative to grouping music into genres, which is often based on social connotations rather than the attributes of the actual music. It also suggests that those in academia and industry (e.g. Spotify and Pandora) that are already coding music on a multitude of attributes might save time and money by coding music around these three composite categories instead.
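For the curious, the kind of grouping procedure the release describes can be sketched with a toy k-means clustering of musical attributes. The attribute names and their three-dimensional ratings below are invented for illustration; the paper’s actual statistical method and data differ:

```python
# Toy illustration of clustering musical attributes into three groups.
# Attribute names and ratings are invented; the paper's procedure differs.
import math

# Hypothetical ratings: (energy-related, emotion-related, sophistication-related)
attributes = {
    "aggressive":    (0.9, 0.3, 0.1),
    "intense":       (0.8, 0.4, 0.2),
    "sad":           (0.1, 0.9, 0.2),
    "happy":         (0.2, 0.8, 0.1),
    "sophisticated": (0.2, 0.3, 0.9),
    "complex":       (0.3, 0.2, 0.8),
}

def kmeans(points, centroids, iters=10):
    """Minimal k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned group."""
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for name, p in points.items():
            dists = [math.dist(p, c) for c in centroids]
            groups[dists.index(min(dists))].append(name)
        centroids = [
            tuple(sum(points[n][i] for n in g) / len(g) for i in range(3))
            if g else c
            for g, c in zip(groups, centroids)
        ]
    return groups

# Seed one centroid per hoped-for cluster: Arousal, Valence, Depth
clusters = kmeans(attributes, [(0.9, 0.3, 0.1), (0.1, 0.9, 0.2), (0.2, 0.3, 0.9)])
print(clusters)
```

On this toy data the attributes fall into an energy cluster, an emotion cluster, and a sophistication cluster, mirroring the Arousal/Valence/Depth split.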

The researchers also conducted a second study of nearly 10,000 Facebook users who indicated their preferences for 50 musical excerpts from different genres. The researchers were then able to map preferences for these three attribute categories onto five personality traits and 30 detailed personality facets. For example, they found people who scored high on Openness to Experience preferred Depth in music, while Extraverted excitement-seekers preferred high Arousal in music. And those who scored high on Neuroticism preferred negative emotions in music, while those who were self-assured preferred positive emotions in music. As the title from the old Kern and Hammerstein song suggests, “The Song is You”. That is, the musical attributes that you like most reflect your personality. It also provides scientific support for what Joni Mitchell said in a 2013 interview with the CBC: “The trick is if you listen to that music and you see me, you’re not getting anything out of it. If you listen to that music and you see yourself, it will probably make you cry and you’ll learn something about yourself and now you’re getting something out of it.”

The researchers hope that this information will not only be helpful to music therapists but also for health care professions and even hospitals. For example, recent evidence has showed that music listening can increase recovery after surgery. The researchers argue that information about music preferences and personality could inform a music listening protocol after surgery to boost recovery rates.

The article is another in a series of studies that Greenberg and his team have published on music and personality. This past July [2015], they published an article in PLOS ONE showing that people’s musical preferences are linked to thinking styles. And in October [2015], they published an article in the Journal of Research in Personality, identifying the personality trait Openness to Experience as a key predictor of musical ability, even in non-musicians. This series of studies tells us that there are close links between our personality and musical behavior that may be beyond our control and awareness.

Readers can find out how they score on the music and personality quizzes at www.musicaluniverse.org.

David M. Greenberg, lead author from Cambridge University and City University of New York said: “Genre labels are informative but we’re trying to transcend them and move in a direction that points to the detailed characteristics in music that are driving people’s preferences and emotional reactions.”

Greenberg added: “As a musician, I see how vast the powers of music really are, and unfortunately, many of us do not use music to its full potential. Our ultimate goal is to create science that will help enhance the experience of listening to music. We want to use this information about personality and preferences to increase the day-to-day enjoyment and peak experiences people have with music.”

William Hoffman in a May 11, 2016 article for Inverse describes the work in connection with recently released new music from Radiohead and an upcoming release from Chance the Rapper (along with a brief mention of Drake), Note: Links have been removed,

Music critics regularly scour Thesaurus.com for the best adjectives to throw into their perfectly descriptive melodious disquisitions on the latest works from Drake, Radiohead, or whomever. And listeners of all walks have, since the beginning of music itself, been guilty of lazily pigeonholing artists into numerous socially constructed genres. But all of that can be (and should be) thrown out the window now, because new research suggests that, to perfectly match music to a listener’s personality, all you need are these three scientific measurables [arousal, valence, depth].

This suggests that a slow, introspective gospel song from Chance The Rapper’s upcoming album could have the same depth as a track from Radiohead’s A Moon Shaped Pool. So a system of categorization based on Greenberg’s research would, surprisingly but rightfully, place the rap and rock works in the same bin.

Here’s a link to and a citation for the latest paper,

The Song Is You: Preferences for Musical Attribute Dimensions Reflect Personality by David M. Greenberg, Michal Kosinski, David J. Stillwell, Brian L. Monteiro, Daniel J. Levitin, and Peter J. Rentfrow. Social Psychological and Personality Science, 1948550616641473, first published on May 9, 2016

This paper is behind a paywall.

Here’s a link to and a citation for the October 2015 paper

Personality predicts musical sophistication by David M. Greenberg, Daniel Müllensiefen, Michael E. Lamb, Peter J. Rentfrow. Journal of Research in Personality Volume 58, October 2015, Pages 154–158 doi:10.1016/j.jrp.2015.06.002 Note: A Feb. 2016 erratum is also listed.

The paper is behind a paywall and it looks as if you will have to pay for it and for the erratum separately.

Here’s a link to and a citation for the July 2015 paper,

Musical Preferences are Linked to Cognitive Styles by David M. Greenberg, Simon Baron-Cohen, David J. Stillwell, Michal Kosinski, Peter J. Rentfrow. PLOS [Public Library of Science ONE]  http://dx.doi.org/10.1371/journal.pone.0131151 Published: July 22, 2015

This paper is open access.

I tried out the research project’s website, The Musical Universe, by filling out the Musical Taste questionnaire. Unfortunately, I did not receive my results. Since the team’s latest research has just been reported, I imagine there are many people trying to do the same thing. It might be worth your while to wait a bit if you want to try this out, or you can fill out one of their other questionnaires. Oh, and you might want to allot at least 20 minutes.

Split some water molecules and save solar and wind (energy) for a future day

Professor Ted Sargent’s research team at the University of Toronto has developed a new technique for saving the energy harvested by sun and wind farms, according to a March 28, 2016 news item on Nanotechnology Now,

We can’t control when the wind blows and when the sun shines, so finding efficient ways to store energy from alternative sources remains an urgent research problem. Now, a group of researchers led by Professor Ted Sargent at the University of Toronto’s Faculty of Applied Science & Engineering may have a solution inspired by nature.

The team has designed the most efficient catalyst for storing energy in chemical form, by splitting water into hydrogen and oxygen, just like plants do during photosynthesis. Oxygen is released harmlessly into the atmosphere, and hydrogen, as H2, can be converted back into energy using hydrogen fuel cells.

Discovering a better way of storing energy from solar and wind farms is “one of the grand challenges in this field,” Ted Sargent says (photo above by Megan Rosenbloom via flickr) Courtesy: University of Toronto


A March 24, 2016 University of Toronto news release by Marit Mitchell, which originated the news item, expands on the theme,

“Today on a solar farm or a wind farm, storage is typically provided with batteries. But batteries are expensive, and can typically only store a fixed amount of energy,” says Sargent. “That’s why discovering a more efficient and highly scalable means of storing energy generated by renewables is one of the grand challenges in this field.”

You may have seen the popular high-school science demonstration where the teacher splits water into its component elements, hydrogen and oxygen, by running electricity through it. Today this requires so much electrical input that it’s impractical to store energy this way: too great a proportion of the energy generated is lost in the process of storing it.

This new catalyst facilitates the oxygen-evolution portion of the chemical reaction, making the conversion from H2O into O2 and H2 more energy-efficient than ever before. The new catalyst material is intrinsically over three times more efficient than the best state-of-the-art catalyst.
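To see why a lower-overpotential catalyst matters for storage, here is a minimal sketch of electrolysis efficiency. The 1.23 V thermodynamic minimum for water splitting is standard; the overpotential values are hypothetical, not figures from the paper:

```python
# Illustrative only: how overpotential erodes the efficiency of storing
# electricity as hydrogen. Overpotential values are hypothetical.
E_REV = 1.23  # V, reversible (thermodynamic) water-splitting voltage

def electrolysis_efficiency(overpotential_v: float) -> float:
    """Fraction of electrical energy stored as chemical energy,
    ignoring ohmic and faradaic losses."""
    return E_REV / (E_REV + overpotential_v)

for eta in (0.5, 0.3, 0.1):  # hypothetical total overpotentials, in volts
    print(f"overpotential {eta:.1f} V -> {electrolysis_efficiency(eta):.0%} efficient")
```

Shaving even a few tenths of a volt off the overpotential recovers a meaningful fraction of the energy that would otherwise be lost as heat.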

Details are offered in the news release,

The new catalyst is made of abundant and low-cost metals tungsten, iron and cobalt, which are much less expensive than state-of-the-art catalysts based on precious metals. It showed no signs of degradation over more than 500 hours of continuous activity, unlike other efficient but short-lived catalysts. …

“With the aid of theoretical predictions, we became convinced that including tungsten could lead to a better oxygen-evolving catalyst. Unfortunately, prior work did not show how to mix tungsten homogeneously with the active metals such as iron and cobalt,” says one of the study’s lead authors, Dr. Bo Zhang … .

“We invented a new way to distribute the catalyst homogenously in a gel, and as a result built a device that works incredibly efficiently and robustly.”

This research united engineers, chemists, materials scientists, mathematicians, physicists, and computer scientists across three countries. A chief partner in this joint theoretical-experimental study was a leading team of theorists at Stanford University and SLAC National Accelerator Laboratory under the leadership of Dr. Aleksandra Vojvodic. The international collaboration included researchers at East China University of Science & Technology, Tianjin University, Brookhaven National Laboratory, Canadian Light Source and the Beijing Synchrotron Radiation Facility.

“The team developed a new materials synthesis strategy to mix multiple metals homogeneously — thereby overcoming the propensity of multi-metal mixtures to separate into distinct phases,” said Jeffrey C. Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems at Massachusetts Institute of Technology. “This work impressively highlights the power of tightly coupled computational materials science with advanced experimental techniques, and sets a high bar for such a combined approach. It opens new avenues to speed progress in efficient materials for energy conversion and storage.”

“This work demonstrates the utility of using theory to guide the development of improved water-oxidation catalysts for further advances in the field of solar fuels,” said Gary Brudvig, a professor in the Department of Chemistry at Yale University and director of the Yale Energy Sciences Institute.

“The intensive research by the Sargent group in the University of Toronto led to the discovery of oxy-hydroxide materials that exhibit electrochemically induced oxygen evolution at the lowest overpotential and show no degradation,” said University Professor Gabor A. Somorjai of the University of California, Berkeley, a leader in this field. “The authors should be complimented on the combined experimental and theoretical studies that led to this very important finding.”

Here’s a link to and a citation for the paper,

Homogeneously dispersed, multimetal oxygen-evolving catalysts by Bo Zhang, Xueli Zheng, Oleksandr Voznyy, Riccardo Comin, Michal Bajdich, Max García-Melchor, Lili Han, Jixian Xu, Min Liu, Lirong Zheng, F. Pelayo García de Arquer, Cao Thang Dinh, Fengjia Fan, Mingjian Yuan, Emre Yassitepe, Ning Chen, Tom Regier, Pengfei Liu, Yuhang Li, Phil De Luna, Alyf Janmohamed, Huolin L. Xin, Huagui Yang, Aleksandra Vojvodic, Edward H. Sargent. Science  24 Mar 2016: DOI: 10.1126/science.aaf1525

This paper is behind a paywall.

3D microtopographic scaffolds for transplantation and generation of reprogrammed human neurons

Should this technology prove successful once they start testing on people, the stated goal is to use it for the treatment of human neurodegenerative disorders such as Parkinson’s disease.  But, I can’t help wondering if they might also consider constructing an artificial brain.

Getting back to the 3D scaffolds for neurons, a March 17, 2016 US National Institutes of Health (NIH) news release (also on EurekAlert), makes the announcement,

National Institutes of Health-funded scientists have developed a 3D micro-scaffold technology that promotes reprogramming of stem cells into neurons, and supports growth of neuronal connections capable of transmitting electrical signals. The injection of these networks of functioning human neural cells — compared to injecting individual cells — dramatically improved their survival following transplantation into mouse brains. This is a promising new platform that could make transplantation of neurons a viable treatment for a broad range of human neurodegenerative disorders.

Previously, transplantation of neurons to treat neurodegenerative disorders, such as Parkinson’s disease, had very limited success due to poor survival of neurons that were injected as a solution of individual cells. The new research is supported by the National Institute of Biomedical Imaging and Bioengineering (NIBIB), part of NIH.

“Working together, the stem cell biologists and the biomaterials experts developed a system capable of shuttling neural cells through the demanding journey of transplantation and engraftment into host brain tissue,” said Rosemarie Hunziker, Ph.D., director of the NIBIB Program in Tissue Engineering and Regenerative Medicine. “This exciting work was made possible by the close collaboration of experts in a wide range of disciplines.”

The research was performed by researchers from Rutgers University, Piscataway, New Jersey, departments of Biomedical Engineering, Neuroscience and Cell Biology, Chemical and Biochemical Engineering, and the Child Health Institute; Stanford University School of Medicine’s Institute of Stem Cell Biology and Regenerative Medicine, Stanford, California; the Human Genetics Institute of New Jersey, Piscataway; and the New Jersey Center for Biomaterials, Piscataway. The results are reported in the March 17, 2016 issue of Nature Communications.

The researchers experimented with creating scaffolds made of different types of polymer fibers, and of varying thickness and density. They ultimately created a web of relatively thick fibers using a polymer that stem cells successfully adhered to. The stem cells used were human induced pluripotent stem cells (iPSCs), which can be readily generated from adult cell types such as skin cells. The iPSCs were induced to differentiate into neural cells by introducing the protein NeuroD1 into the cells.

The space between the polymer fibers turned out to be critical. “If the scaffolds were too dense, the stem cell-derived neurons were unable to integrate into the scaffold, whereas if they are too sparse then the network organization tends to be poor,” explained Prabhas Moghe, Ph.D., distinguished professor of biomedical engineering & chemical engineering at Rutgers University and co-senior author of the paper. “The optimal pore size was one that was large enough for the cells to populate the scaffold but small enough that the differentiating neurons sensed the presence of their neighbors and produced outgrowths resulting in cell-to-cell contact. This contact enhances cell survival and development into functional neurons able to transmit an electrical signal across the developing neural network.”

To test the viability of neuron-seeded scaffolds when transplanted, the researchers created micro-scaffolds that were small enough for injection into mouse brain tissue using a standard hypodermic needle. They injected scaffolds carrying the human neurons into brain slices from mice and compared them to human neurons injected as individual, dissociated cells.

The neurons on the scaffolds had dramatically increased cell-survival compared with the individual cell suspensions. The scaffolds also promoted improved neuronal outgrowth and electrical activity. Neurons injected individually in suspension resulted in very few cells surviving the transplant procedure.

Human neurons on scaffolds compared to neurons in solution were then tested when injected into the brains of live mice. Similar to the results in the brain slices, the survival rate of neurons on the scaffold network was increased nearly 40-fold compared to injected isolated cells. A critical finding was that the neurons on the micro-scaffolds expressed proteins that are involved in the growth and maturation of neural synapses–a good indication that the transplanted neurons were capable of functionally integrating into the host brain tissue.

The success of the study gives this interdisciplinary group reason to believe that their combined areas of expertise have resulted in a system with much promise for eventual treatment of human neurodegenerative disorders. In fact, they are now refining their system for specific use as an eventual transplant therapy for Parkinson’s disease. The plan is to develop methods to differentiate the stem cells into neurons that produce dopamine, the specific neuron type that degenerates in individuals with Parkinson’s disease. The work also will include fine-tuning the scaffold materials, mechanics and dimensions to optimize the survival and function of dopamine-producing neurons, and finding the best mouse models of the disease to test this Parkinson’s-specific therapy.

Here’s a link to and a citation for the paper,

Generation and transplantation of reprogrammed human neurons in the brain using 3D microtopographic scaffolds by Aaron L. Carlson, Neal K. Bennett, Nicola L. Francis, Apoorva Halikere, Stephen Clarke, Jennifer C. Moore, Ronald P. Hart, Kenneth Paradiso, Marius Wernig, Joachim Kohn, Zhiping P. Pang, & Prabhas V. Moghe. Nature Communications 7, Article number: 10862  doi:10.1038/ncomms10862 Published 17 March 2016

This paper is open access.

Cambridge University researchers tell us why Spiderman can’t exist while Stanford University proves otherwise

A team of zoology researchers at Cambridge University (UK) find themselves in the unenviable position of having their peer-reviewed study used as a source of unintentional humour. I gather zoologists (Cambridge) and engineers (Stanford) don’t have much opportunity to share information.

A Jan. 18, 2016 news item on ScienceDaily announces the Cambridge research findings,

Latest research reveals why geckos are the largest animals able to scale smooth vertical walls — even larger climbers would require unmanageably large sticky footpads. Scientists estimate that a human would need adhesive pads covering 40% of their body surface in order to walk up a wall like Spiderman, and believe their insights have implications for the feasibility of large-scale, gecko-like adhesives.

A Jan. 18, 2016 Cambridge University press release (also on EurekAlert), which originated the news item, describes the research and the thinking that led to the researchers’ conclusions,

Dr David Labonte and his colleagues in the University of Cambridge’s Department of Zoology found that tiny mites use approximately 200 times less of their total body area for adhesive pads than geckos, nature’s largest adhesion-based climbers. And humans? We’d need about 40% of our total body surface, or roughly 80% of our front, to be covered in sticky footpads if we wanted to do a convincing Spiderman impression.

Once an animal is big enough to need a substantial fraction of its body surface to be covered in sticky footpads, the necessary morphological changes would make the evolution of this trait impractical, suggests Labonte.

“If a human, for example, wanted to walk up a wall the way a gecko does, we’d need impractically large sticky feet – our shoes would need to be a European size 145 or a US size 114,” says Walter Federle, senior author also from Cambridge’s Department of Zoology.

The researchers say that these insights into the size limits of sticky footpads could have profound implications for developing large-scale bio-inspired adhesives, which are currently only effective on very small areas.

“As animals increase in size, the amount of body surface area per volume decreases – an ant has a lot of surface area and very little volume, and a blue whale is mostly volume with not much surface area” explains Labonte.

“This poses a problem for larger climbing species because, when they are bigger and heavier, they need more sticking power to be able to adhere to vertical or inverted surfaces, but they have comparatively less body surface available to cover with sticky footpads. This implies that there is a size limit to sticky footpads as an evolutionary solution to climbing – and that turns out to be about the size of a gecko.”

Larger animals have evolved alternative strategies to help them climb, such as claws and toes to grip with.

The researchers compared the weight and footpad size of 225 climbing animal species including insects, frogs, spiders, lizards and even a mammal.

“We compared animals covering more than seven orders of magnitude in weight, which is roughly the same as comparing a cockroach to the weight of Big Ben, for example,” says Labonte.

These investigations also gave the researchers greater insights into how the size of adhesive footpads is influenced and constrained by the animals’ evolutionary history.

“We were looking at vastly different animals – a spider and a gecko are about as different as a human is to an ant- but if you look at their feet, they have remarkably similar footpads,” says Labonte.

“Adhesive pads of climbing animals are a prime example of convergent evolution – where multiple species have independently, through very different evolutionary histories, arrived at the same solution to a problem. When this happens, it’s a clear sign that it must be a very good solution.”

The researchers believe we can learn from these evolutionary solutions in the development of large-scale manmade adhesives.

“Our study emphasises the importance of scaling for animal adhesion, and scaling is also essential for improving the performance of adhesives over much larger areas. There is a lot of interesting work still to do looking into the strategies that animals have developed in order to maintain the ability to scale smooth walls, which would likely also have very useful applications in the development of large-scale, powerful yet controllable adhesives,” says Labonte.

There is one other possible solution to the problem of how to stick when you’re a large animal, and that’s to make your sticky footpads even stickier.

“We noticed that within closely related species pad size was not increasing fast enough to match body size, probably a result of evolutionary constraints. Yet these animals can still stick to walls,” says Christofer Clemente, a co-author from the University of the Sunshine Coast [Australia].

“Within frogs, we found that they have switched to this second option of making pads stickier rather than bigger. It’s remarkable that we see two different evolutionary solutions to the problem of getting big and sticking to walls,” says Clemente.

“Across all species the problem is solved by evolving relatively bigger pads, but this does not seem possible within closely related species, probably since there is not enough morphological diversity to allow it. Instead, within these closely related groups, pads get stickier. This is a great example of evolutionary constraint and innovation.”
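The surface-area-to-volume argument in the press release can be sketched with some back-of-the-envelope arithmetic. Under simple isometric scaling (a rough model, not the paper's allometric analysis), weight grows with length cubed while available body surface grows only with length squared, so the fraction of body surface needed for pads grows as mass to the one-third power. The gecko reference mass and pad fraction below are illustrative assumptions of mine, not figures from the study:

```python
def required_pad_fraction(mass_kg, ref_mass_kg=0.05, ref_fraction=0.04):
    """Pad area needed scales with weight (~mass), while available body
    surface scales with mass**(2/3), so the required *fraction* of body
    surface grows as mass**(1/3) under isometric geometry."""
    return ref_fraction * (mass_kg / ref_mass_kg) ** (1.0 / 3.0)

# Scaling from a ~50 g gecko (assumed here to use ~4% of its surface
# as adhesive pads) up to a 70 kg human:
human_fraction = required_pad_fraction(70.0)
print(f"{human_fraction:.0%}")  # roughly 45%, in the ballpark of the quoted 40%
```

With these assumed reference values the simple model lands close to the researchers' "about 40% of total body surface" estimate, which is why pad area stops being a workable evolutionary solution somewhere around gecko size.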

A researcher at Stanford University (US) took strong exception to the Cambridge team’s conclusions, from a Jan. 28, 2016 article by Michael Grothaus for Fast Company (Note: A link has been removed),

It seems the dreams of the web-slinger’s fans were crushed forever—that is until a rival university swooped in and saved the day. A team of engineers working with mechanical engineering graduate student Elliot Hawkes at Stanford University have announced [in 2014] that they’ve invented a device called “gecko gloves” that proves the Cambridge researchers wrong.

Hawkes has created a video outlining the nature of his dispute with Cambridge University and with US TV talk show host Stephen Colbert, who featured the Cambridge University research in one of his monologues,

To be fair to Hawkes, he does prove his point. A Nov. 21, 2014 Stanford University report by Bjorn Carey describes Hawkes’s ingenious ‘sticky pads’,

Each handheld gecko pad is covered with 24 adhesive tiles, and each of these is covered with sawtooth-shape polymer structures each 100 micrometers long (about the width of a human hair).

The pads are connected to special degressive springs, which become less stiff the further they are stretched. This characteristic means that when the springs are pulled upon, they apply an identical force to each adhesive tile and cause the sawtooth-like structures to flatten.

“When the pad first touches the surface, only the tips touch, so it’s not sticky,” said co-author Eric Eason, a graduate student in applied physics. “But when the load is applied, and the wedges turn over and come into contact with the surface, that creates the adhesion force.”

As with actual geckos, the adhesives can be “turned” on and off. Simply release the load tension, and the pad loses its stickiness. “It can attach and detach with very little wasted energy,” Eason said.

The ability of the device to scale up controllable adhesion to support large loads makes it attractive for several applications beyond human climbing, said Mark Cutkosky, the Fletcher Jones Chair in the School of Engineering and senior author on the paper.

“Some of the applications we’re thinking of involve manufacturing robots that lift large glass panels or liquid-crystal displays,” Cutkosky said. “We’re also working on a project with NASA’s Jet Propulsion Laboratory to apply these to the robotic arms of spacecraft that could gently latch on to orbital space debris, such as fuel tanks and solar panels, and move it to an orbital graveyard or pitch it toward Earth to burn up.”

Previous work on synthetic and gecko adhesives showed that adhesive strength decreased as the size increased. In contrast, the engineers have shown that the special springs in their device make it possible to maintain the same adhesive strength at all sizes from a square millimeter to the size of a human hand.

The current version of the device can support about 200 pounds, Hawkes said, but, theoretically, increasing its size by 10 times would allow it to carry almost 2,000 pounds.
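The load figures in the report follow from simple proportional scaling: because the degressive springs keep adhesive strength per unit area constant at any size, supportable load grows linearly with total pad area. A minimal sketch (the function name is my own, purely for illustration):

```python
def scaled_capacity(base_load_lb, area_scale_factor):
    """With constant adhesive strength per unit area, the supportable
    load scales linearly with total pad area."""
    return base_load_lb * area_scale_factor

# A device supporting about 200 pounds, scaled to 10 times the area:
print(scaled_capacity(200, 10))  # 2000, matching the 'almost 2,000 pounds' figure
```

This linearity is exactly what earlier synthetic gecko adhesives lacked, since their adhesive strength decreased as size increased.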

Here’s a link to and a citation for the Stanford paper,

Human climbing with efficiently scaled gecko-inspired dry adhesives by Elliot W. Hawkes, Eric V. Eason, David L. Christensen, Mark R. Cutkosky. Journal of the Royal Society Interface DOI: 10.1098/rsif.2014.0675 Published 19 November 2014

This paper is open access.

To be fair to the Cambridge researchers, it’s stretching it a bit to say that Hawkes’s gecko gloves allow someone to be like Spiderman. That’s a very careful, slow climb achieved in a relatively short period of time. Can the human body remain suspended that way for more than a few minutes? How big do your sticky pads have to be if you’re going to have the same wall-climbing ease of movement and staying power of either a gecko or Spiderman?

Here’s a link to and a citation for the Cambridge paper,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J. Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J. Crosby, Duncan J. Irschick, and Walter Federle. PNAS doi: 10.1073/pnas.1519459113

This paper is behind a paywall but there is an open access preprint version, which may differ from the PNAS version, available,

Extreme positive allometry of animal adhesive pads and the size limits of adhesion-based climbing by David Labonte, Christofer J Clemente, Alex Dittrich, Chi-Yun Kuo, Alfred J Crosby, Duncan J Irschick, Walter Federle. bioRxiv
doi: http://dx.doi.org/10.1101/033845

I hope that if the Cambridge researchers respond, they will be witty rather than huffy. Finally, there’s this gecko image (which I love) from the Cambridge researchers,

Caption: This image shows a gecko and ant. Credit: Image courtesy of A Hackmann and D Labonte

Simon Fraser University (Vancouver, Canada) and its president’s (Andrew Petter) dream colloquium: big data

Simon Fraser University (SFU) in Vancouver, Canada has a ‘big data’ start to 2016 planned for president Andrew Petter’s Dream Colloquium, according to a Jan. 5, 2016 news release,

Big data explained: SFU launches spring 2016 President’s Dream Colloquium

Speaker series tackles history, use and implications of collecting data


Canadians experience and interact with big data on a daily basis. Some interactions are as simple as buying coffee or as complex as filling out the Canadian government’s mandatory long-form census. But while big data may be one of the most important technological and social shifts in the past five years, many experts are still grappling with what to do with the massive amounts of information being gathered every day.


To help understand the implications of collecting, analyzing and using big data, Simon Fraser University is launching the President’s Dream Colloquium on Engaging Big Data on Tuesday, January 5.


“Big data affects all sectors of society from governments to businesses to institutions to everyday people,” says Peter Chow-White, SFU Associate Professor of Communication. “This colloquium brings together people from industry and scholars in computing and social sciences in a dialogue around one of the most important innovations of our time next to the Internet.”


This spring marks the first President’s Dream Colloquium where all faculty and guest lectures will be available to the public. The speaker series will give a historical overview of big data, specific case studies in how big data is used today and discuss what the implications are for this information’s usage in business, health and government in the future.


The series includes notable guest speakers such as managing director of Microsoft Research, Surajit Chaudhuri, and Tableau co-founder Pat Hanrahan.  


“Pat Hanrahan is a leader in a number of sectors and Tableau is a leader in accessing big data through visual analytics,” says Chow-White. “Rather than big data being available to only a small amount of professionals, Tableau makes it easier for everyday people to access and understand it in a visual way.”


The speaker series is free to attend with registration. Lectures will be webcast live and available on the President’s Dream Colloquium website.

  • By 2020, over 1/3 of all data will live in or pass through the cloud.
  • Data production will be 44 times greater in 2020 than it was in 2009.
  • More than 70 percent of the digital universe is generated by individuals. But enterprises have responsibility for the storage, protection and management of 80 percent of that.

(Statistics provided by CSC)

The course features lectures from notable guest speakers including:

  • Sasha Issenberg, Author and Journalist
    Tuesday, January 12, 2016
  • Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
    Tuesday, January 19, 2016
  • Pat Hanrahan, Professor at the Stanford Computer Graphics Laboratory, Cofounder and Chief Scientist of Tableau, Founding member of Pixar
    Wednesday, February 3, 2016
  • Sheelagh Carpendale, Professor of Computing Science, University of Calgary, Canada Research Chair in Information Visualization
    Tuesday, February 23, 2016, 3:30pm
  • Colin Hill, CEO of GNS Healthcare
    Tuesday, March 8, 2016
  • Chad Skelton, Award-winning Data Journalist and Consultant
    Tuesday, March 22, 2016

Not to worry, even though the first talk with Sasha Issenberg and Mark Pickup (strangely, Pickup, an SFU professor of political science, is not mentioned in the news release or on the event page) has taken place, a webcast is being posted to the event page here.

I watched the first event live (via a livestream webcast, which I accessed by clicking on the link found on the event’s speaker’s page) and found it quite interesting, although I’m not sure about asking Issenberg to speak extemporaneously. He rambled and offered more detail about things that don’t matter much to a Canadian audience. I couldn’t tell if part of the problem might lie with the fact that his ‘big data’ book (The Victory Lab: The Secret Science of Winning Campaigns) was published a while back; he has since published one on medical tourism and is about to publish one on same sex marriages and the LGBTQ communities in the US. As someone else who moves from topic to topic, I know it’s an effort to ‘go back in time’, to remember the details, and to recapture the enthusiasm that made the piece interesting. Also, he has yet to get the latest scoop on big data and politics in the US, as the 2016 campaign trail won’t get underway until later in January.

So, thanks to Issenberg for managing to dredge up as much as he did. Happily, he did recognize that there are differences between Canada and the US in the type of election data that is gathered and the other data that can be accessed. He provided a capsule version of the data situation in the US, where they can identify individuals and predict how they might vote, while Pickup focused on the Canadian scene. As one expects from Canadian political parties and Canadian agencies in general, no one really wants to share how much information they can actually access (yes, that’s true of the Liberals and the NDP [New Democrats] too). By contrast, political parties and strategists in the US quite openly shared information with Issenberg about where and how they get data.

Pickup made some interesting points about data and how more data does not lead to better predictions. There was one study done on psychologists, which Pickup replicated with undergraduate political science students. The psychologists and the political science students in the two separate studies were given data and asked to predict behaviour. They were then given more data about the same individuals and asked again to predict behaviour. In all, there were four sessions in which the subjects were given successively more data and asked to predict behaviour based on that data. You may have already guessed, but prediction accuracy decreased each time more information was added. Conversely, the people making the predictions became more confident as their predictive accuracy declined. A little disconcerting, non?

Pickup made another point noting that it may be easier to use big data to predict voting behaviour in a two-party system such as they have in the US but a multi-party system such as we have in Canada offers more challenges.

So, it was a good beginning and I look forward to more in the coming weeks (President’s Dream Colloquium on Engaging Big Data). Remember if you can’t listen to the live session, just click through to the event’s speaker’s page where they have hopefully posted the webcast.

The next dream colloquium takes place Tuesday, Jan. 19, 2016,

Big Data since 1854

Dr. Surajit Chaudhuri, Scientist and Managing Director of XCG (Microsoft Research)
Stanford University, PhD
Tuesday, January 19, 2016, 3:30–5 pm
IRMACS Theatre, ASB 10900, Burnaby campus [or by webcast]