Tag Archives: University of Southern California

Art and science, Salk Institute and Los Angeles County Museum of Art (LACMA), to study museum visitor behaviour

The Salk Institute wouldn’t have been my first guess for the science partner in this art and science project, which will be examining museum visitor behaviour. From the September 28, 2022 Salk Institute news release (also on EurekAlert and a copy received via email) announcing the project grant,

Clay vessels of innumerable shapes and sizes come to life as they illuminate a rich history of symbolic meanings and identity. Some museum visitors may lean in to get a better view, while others converse with their friends over the rich hues. Exhibition designers have long wondered how the human brain senses, perceives, and learns in the rich environment of a museum gallery.

In a synthesis of science and art, Salk scientists have teamed up with curators and design experts at the Los Angeles County Museum of Art (LACMA) to study how nearly 100,000 museum visitors respond to exhibition design. The goal of the project, funded by a $900,000 grant from the National Science Foundation, is to better understand how people perceive, make choices in, interact with, and learn from a complex environment, and to further enhance the educational mission of museums through evidence-based design strategies.   

The Salk team is led by Professor Thomas Albright, Salk Fellow Talmo Pereira, and Staff Scientist Sergei Gepshtein.

The experimental exhibition at LACMA—called “Conversing in Clay: Ceramics from the LACMA Collection”—is open until May 21, 2023.

“LACMA is one of the world’s greatest art museums, so it is wonderful to be able to combine its expertise with our knowledge of brain function and behavior,” says Albright, director of Salk’s Vision Center Laboratory and Conrad T. Prebys Chair in Vision Research. “The beauty of this project is that it extends our laboratory research on perception, memory, and decision-making into the real world.”

Albright and Gepshtein study the visual system and how it informs decisions and behaviors. A major focus of their work is uncovering how perception guides movement in space. Pereira’s expertise lies in measuring and quantifying behaviors. He invented a deep learning technique called SLEAP [Social LEAP Estimates Animal Poses], which precisely captures the movements of organisms, from single cells to whales, using conventional videography. This technology has enabled scientists to describe behaviors with unprecedented precision.

For this project, the scientists have placed 10 video cameras throughout a LACMA gallery. The researchers will record how the museum environment shapes behaviors as visitors move through the space, including preferred viewing locations, paths and rates of movement, postures, social interactions, gestures, and expressions. Those behaviors will, in turn, provide novel insights into the underlying perceptual and cognitive processes that guide our engagement with works of art. The scientists will also test strategic modifications to gallery design to produce the most rewarding experience.

“We plan to capture every behavior that every person does while visiting the exhibit,” Pereira says. “For example, how long they stand in front of an object, whether they’re talking to a friend or even scratching their head. Then we can use this data to predict how the visitor will act next, such as if they will visit another object in the exhibit or if they leave instead.”
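
(An aside from me, not from the news release: here’s a minimal sketch of the kind of dwell-time analysis such tracking data could support. It is purely illustrative; the per-frame record format, the artwork coordinates, and the 1.5-metre “viewing radius” are my own assumptions, not anything from the Salk/LACMA project or from SLEAP itself.)

# Toy dwell-time calculation from hypothetical visitor-tracking data.
# Each record is (visitor_id, timestamp_in_seconds, x, y) from some position tracker.
from collections import defaultdict
from math import dist

artworks = {"vessel_A": (2.0, 3.5), "vessel_B": (6.0, 1.0)}  # assumed gallery coordinates (metres)
VIEWING_RADIUS = 1.5  # assumed distance within which a visitor counts as "viewing"

def dwell_times(track):
    """Sum the time each visitor spends within VIEWING_RADIUS of each artwork."""
    totals = defaultdict(float)  # (visitor_id, artwork_name) -> seconds
    previous = {}                # visitor_id -> (t, x, y) of their last observation
    for visitor, t, x, y in track:
        if visitor in previous:
            t_prev, x_prev, y_prev = previous[visitor]
            for name, position in artworks.items():
                if dist((x_prev, y_prev), position) <= VIEWING_RADIUS:
                    totals[(visitor, name)] += t - t_prev
        previous[visitor] = (t, x, y)
    return dict(totals)

sample = [("v1", 0.0, 2.1, 3.4), ("v1", 4.0, 2.2, 3.6), ("v1", 9.0, 6.1, 1.1)]
print(dwell_times(sample))  # {('v1', 'vessel_A'): 9.0}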

Results from the study will help inform future exhibit design and visitor experience and provide an unprecedented quantitative look at how human systems for perception and memory lead to predictable decisions and actions in a rich sensory environment.

“As a museum that has a long history of melding art with science and technology, we are thrilled to partner with the Salk Institute for this study,” says Michael Govan, LACMA CEO and Wallis Annenberg director. “LACMA is always striving to create accessible, engaging gallery environments for all visitors. We look forward to applying what we learn to our approach to gallery design and to enhance visitor experience.” 

Next, the scientists plan to employ this experimental approach to gain a better understanding of how the design of environments for people with specific needs, like school-age children or patients with dementia, might improve cognitive processes and behaviors.

Several members of the research team are also members of the Academy of Neuroscience for Architecture, which seeks to promote and advance knowledge that links neuroscience research to a growing understanding of human responses to the built environment.

Gepshtein is also a member of Salk’s Center for the Neurobiology of Vision and director of the Collaboratory for Adaptive Sensory Technologies. Additionally, he serves as the director of the Center for Spatial Perception & Concrete Experience at the University of Southern California.

About the Los Angeles County Museum of Art:

LACMA is the largest art museum in the western United States, with a collection of more than 149,000 objects that illuminate 6,000 years of artistic expression across the globe. Committed to showcasing a multitude of art histories, LACMA exhibits and interprets works of art from new and unexpected points of view that are informed by the region’s rich cultural heritage and diverse population. LACMA’s spirit of experimentation is reflected in its work with artists, technologists, and thought leaders as well as in its regional, national, and global partnerships to share collections and programs, create pioneering initiatives, and engage new audiences.

About the Salk Institute for Biological Studies:

Every cure has a starting point. The Salk Institute embodies Jonas Salk’s mission to dare to make dreams into reality. Its internationally renowned and award-winning scientists explore the very foundations of life, seeking new understandings in neuroscience, genetics, immunology, plant biology, and more. The Institute is an independent nonprofit organization and architectural landmark: small by choice, intimate by nature, and fearless in the face of any challenge. Be it cancer or Alzheimer’s, aging or diabetes, Salk is where cures begin. Learn more at: salk.edu.

I find this image quite intriguing,

Caption: Motion capture technology is used to classify human behavior in an art exhibition. Credit: Salk Institute

I’m trying to figure out how they’ll do this. Will each visitor be ‘tagged’ as they enter the LACMA gallery so they can be ‘followed’ individually as they respond (or don’t respond) to the exhibits? Will they be notified that they are participating in a study?

I was tracked without my knowledge or consent at the Vancouver (Canada) Art Gallery’s (VAG) exhibition, “The Imitation Game: Visual Culture in the Age of Artificial Intelligence” (March 5, 2022 – October 23, 2022). It was disconcerting to find out that my ‘tracks’ had become part of a real time installation. (The result of my trip to the VAG was a two-part commentary: “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver [Canada] Art Gallery [1 of 2]: The Objects” and “Mad, bad, and dangerous to know? Artificial Intelligence at the Vancouver [Canada] Art Gallery [2 of 2]: Meditations”. My response to the experience can be found under the ‘Eeek’ subhead of part 2: Meditations. For the curious, part 1: The Objects is here.)

Putting science back into pop culture and selling books

Clifford V. Johnson is very good at promoting books. I tip my hat to him; that’s an excellent talent to have, especially when you’ve written a book. In his case, it’s a graphic novel titled ‘The Dialogues: Conversations about the Nature of the Universe’.

I first stumbled across Johnson, a physicist and professor at the University of Southern California, and his work in this January 18, 2018 news item on phys.org,

How often do you, outside the requirements of an assignment, ponder things like the workings of a distant star, the innards of your phone camera, or the number and layout of petals on a flower? Maybe a little bit, maybe never. Too often, people regard science as sitting outside the general culture: A specialized, difficult topic carried out by somewhat strange people with arcane talents. It’s somehow not for them.

But really science is part of the wonderful tapestry of human culture, intertwined with things like art, music, theater, film and even religion. These elements of our culture help us understand and celebrate our place in the universe, navigate it and be in dialogue with it and each other. Everyone should be able to engage freely in whichever parts of the general culture they choose, from going to a show or humming a tune to talking about a new movie over dinner.

Science, though, gets portrayed as opposite to art, intuition and mystery, as though knowing in detail how that flower works somehow undermines its beauty. As a practicing physicist, I disagree. Science can enhance our appreciation of the world around us. It should be part of our general culture, accessible to all. Those “special talents” required in order to engage with and even contribute to science are present in all of us.

Here’s more from his January 18, 2018 essay on The Conversation (which originated the news item). Note: Links have been removed,

… in addition to being a professor, I work as a science advisor for various forms of entertainment, from blockbuster movies like the recent “Thor: Ragnarok,” or last spring’s 10-hour TV dramatization of the life and work of Albert Einstein (“Genius,” on National Geographic), to the bestselling novel “Dark Matter,” by Blake Crouch. People spend a lot of time consuming entertainment simply because they love stories like these, so it makes sense to put some science in there.

Science can actually help make storytelling more entertaining, engaging and fun – as I explain to entertainment professionals every chance I get. From their perspective, they get potentially bigger audiences. But good stories, enhanced by science, also spark valuable conversations about the subject that continue beyond the movie theater.
Science can be one of the topics woven into the entertainment we consume – via stories, settings and characters. ABC Television

Nonprofit organizations have been working hard on this mission. The Alfred P. Sloan Foundation helps fund and develop films with science content – “The Man Who Knew Infinity” (2015) and “Robot & Frank” (2012) are two examples. (The Sloan Foundation is also a funding partner of The Conversation US.)

The National Academy of Sciences set up the Science & Entertainment Exchange to help connect people from the entertainment industry to scientists. The idea is that such experts can provide Hollywood with engaging details and help with more accurate portrayals of scientists that can enhance the narratives they tell. Many of the popular Marvel movies – including “Thor” (2011), “Ant-Man” (2015) and the upcoming “Avengers: Infinity War” – have had their content strengthened in this way.

Encouragingly, a recent Pew Research Center survey in the U.S. showed that entertainment with science or related content is watched by people across “all demographic, educational and political groups,” and that overall they report positive impressions of the science ideas and scenarios contained in them.

Many years ago I realized it is hard to find books on the nonfiction science shelf that let readers see themselves as part of the conversation about science. So I envisioned an entire book of conversations about science taking place between ordinary people. While “eavesdropping” on those conversations, readers learn some science ideas, and are implicitly invited to have conversations of their own. It’s a resurrection of the dialogue form, known to the ancient Greeks, and to Galileo, as a device for exchanging ideas, but with contemporary settings: cafes, restaurants, trains and so on.

Clifford Johnson at his drafting table. Clifford V. Johnson, CC BY-ND

So over six years I taught myself the requisite artistic and other production techniques, and studied the language and craft of graphic narratives. I wrote and drew “The Dialogues: Conversations About the Nature of the Universe” as proof of concept: A new kind of nonfiction science book that can inspire more people to engage in their own conversations about science, and celebrate a spirit of plurality in everyday science participation.

I so enjoyed Johnson’s writing and appreciated how he introduced his book into the piece that I searched for more and found a three-part interview with Henry Jenkins on his Confessions of an Aca-Fan (Academic-Fan) blog. Before moving on to the interview, here’s some information about the interviewer, Henry Jenkins (Note: Links have been removed),

Henry Jenkins is the Provost Professor of Communication, Journalism, Cinematic Arts and Education at the University of Southern California. He arrived at USC in Fall 2009 after spending more than a decade as the Director of the MIT Comparative Media Studies Program and the Peter de Florez Professor of Humanities. He is the author and/or editor of seventeen books on various aspects of media and popular culture, including Textual Poachers: Television Fans and Participatory Culture, Hop on Pop: The Politics and Pleasures of Popular Culture,  From Barbie to Mortal Kombat: Gender and Computer Games, Convergence Culture: Where Old and New Media Collide, Spreadable Media: Creating Meaning and Value in a Networked Culture, and By Any Media Necessary: The New Youth Activism. He is currently editing a handbook on the civic imagination and writing a book on “comics and stuff”. He has written for Technology Review, Computer Games, Salon, and The Huffington Post.

Jenkins is the principal investigator for The Civic Imagination Project, funded by the MacArthur Foundation, to explore ways to inspire creative collaborations within communities as they work together to identify shared values and visions for the future. This project grew out of the Media, Activism, and Participatory Politics research group, also funded by MacArthur, which did case studies of innovative organizations that have been effective at getting young people involved in the political process. He is also the Chief Advisor to the Annenberg Innovation Lab. Jenkins also serves on the jury that selects the Peabody Awards, which recognizes “stories that matter” from radio, television, and the web.

He has previously worked as the principal investigator for  Project New Media Literacies (NML), a group which originated as part of the MacArthur Digital Media and Learning Initiative. Jenkins wrote a white paper on learning in a participatory culture that has become the springboard for the group’s efforts to develop and test educational materials focused on preparing students for engagement with the new media landscape. He also was the founder for the Convergence Culture Consortium, a faculty network which seeks to build bridges between academic researchers and the media industry in order to help inform the rethinking of consumer relations in an age of participatory culture.  The Consortium lives on today via the Transforming Hollywood conference, run jointly between USC and UCLA, which recently hosted its 8th event.  

While at MIT, he was one of the principal investigators for The Education Arcade, a consortium of educators and business leaders working to promote the educational use of computer and video games. Jenkins also plays a significant role as a public advocate for fans, gamers and bloggers: testifying before the U.S. Senate Commerce Committee investigation into “Marketing Violence to Youth” following the Columbine shootings; advocating for media literacy education before the Federal Communications Commission; calling for a more consumer-oriented approach to intellectual property at a closed door meeting of the governing body of the World Economic Forum; signing amicus briefs in opposition to games censorship;  regularly speaking to the press and other media about aspects of media change and popular culture; and most recently, serving as an expert witness in the legal struggle over the fan-made film, Prelude to Axanar.  He also has served as a consultant on the Amazon children’s series Lost in Oz, where he provided insights on world-building and transmedia strategies as well as new media literacy issues.

Jenkins has a B.A. in Political Science and Journalism from Georgia State University, a M.A. in Communication Studies from the University of Iowa and a PhD in Communication Arts from the University of Wisconsin-Madison.

Well, that didn’t seem so simple after all. For a somewhat more personal account of who I am, read on.

About Me

The first thing you are going to discover about me, oh reader of this blog, is that I am prolific as hell. The second is that I am also long-winded as all get out. As someone famous once said, “I would have written it shorter, but I didn’t have enough time.”

My earliest work centered on television fans – particularly science fiction fans. Part of what drew me into graduate school in media studies was a fascination with popular culture. I grew up reading Mad magazine and Famous Monsters of Filmland – and, much as my parents feared, it warped me for life. Early on, I discovered the joys of comic books and science fiction, spent time playing around with monster makeup, started writing scripts for my own Super 8 movies (The big problem was that I didn’t have access to a camera until much later), and collecting television-themed toys. By the time I went to college, I was regularly attending science fiction conventions. Through the woman who would become my wife, I discovered fan fiction. And we spent a great deal of time debating our very different ways of reading our favorite television series.

When I got to graduate school, I was struck by how impoverished the academic framework for thinking about media spectatorship was – basically, though everyone framed it differently, consumers were assumed to be passive, brainless, inarticulate, and brainwashed. None of this jelled well with my own robust experience of being a fan of popular culture. I was lucky enough to get to study under John Fiske, first at Iowa and then at the University of Wisconsin-Madison, who introduced me to the cultural studies perspective. Fiske was a key advocate of ethnographic audience research, arguing that media consumers had more tricks up their sleeves than most academic theory acknowledged.

Out of this tension between academic theory and fan experience emerged first an essay, “Star Trek Reread, Rerun, Rewritten” and then a book, Textual Poachers: Television Fans and Participatory Culture. Textual Poachers emerged at a moment when fans were still largely marginal to the way mass media was produced and consumed, and still hidden from the view of most “average consumers.” As such, the book represented a radically different way of thinking about how one might live in relation to media texts. In the book, I describe fans as “rogue readers.” What most people took from that book was my concept of “poaching,” the idea that fans construct their own culture – fan fiction, artwork, costumes, music and videos – from content appropriated from mass media, reshaping it to serve their own needs and interests. There are two other key concepts in this early work which takes on greater significance in my work today – the idea of participatory culture (which runs throughout Convergence Culture) and the idea of a moral economy (that is, the presumed ethical norms which govern the relations between media producers and consumers).

As for the interview, here’s Jenkins’ introduction to the series and a portion of part one (from Comics and Popular Science: An Interview with Clifford V. Johnson (Part One) posted on November 15, 2017),

Clifford V. Johnson is the first theoretical physicist who I have ever interviewed for my blog. Given the sharp divide that our society constructs between the sciences and the humanities, he may well be the last, but he would be the first to see this gap as tragic, a consequence of the current configuration of disciplines. Johnson, as I have discovered, is deeply committed to helping us recognize the role that science plays in everyday life, a project he pursues actively through his involvement as one of the leaders of the Los Angeles Institute for the Humanities (of which I am also a member), as a consultant on various film and television projects, and now, as the author of a graphic novel, The Dialogues, which is being released this week. We were both on a panel about contemporary graphic storytelling Tara McPherson organized for the USC Sydney Harmon Institute for Polymathic Study and we’ve continued to bat around ideas about the pedagogical potential of comics ever since.

Here’s what I wrote when I was asked to provide a blurb for his new book:

“Two superheroes walk into a natural history museum — what happens after that will have you thinking and talking for a long time to come. Clifford V. Johnson’s The Dialogues joins a select few examples of recent texts, such as Scott McCloud’s Understanding Comics, Larry Gonick’s Cartoon History of the Universe, Nick Sousanis’s Unflattening, Bryan Talbot’s Alice in Sunderland, or Joe Sacco’s Palestine, which use the affordances of graphic storytelling as pedagogical tools for changing the ways we think about the world around us. Johnson displays a solid grasp of the craft of comics, demonstrating how this medium can be used to represent different understandings of the relationship between time and space, questions central to his native field of physics. He takes advantage of the observational qualities of contemporary graphic novels to explore the place of scientific thinking in our everyday lives.”

To my many readers who care about sequential art, this is a book which should be added to your collection — Johnson makes good comics, smart comics, beautiful comics, and comics which are doing important work, all at the same time. What more do you want!

In the interviews that follow, we explore more fully what motivated this particular comic and how approaching comics as a theoretical physicist has helped him to discover some interesting formal aspects of this medium.

What do you want your readers to learn about science over the course of these exchanges? I am struck by the ways you seek to demystify aspects of the scientific process, including the role of theory, equations, and experimentation.

That participatory aspect is core, for sure. Conversations about science by random people out there in the world really do happen – I hear them a lot on the subway, or in cafes, and so I wanted to highlight those and celebrate them. So the book becomes a bit of an invitation to everyone to join in. But then I can show so many other things that typically just get left out of books about science: The ordinariness of the settings in which such conversations can take place, the variety of types of people involved, and indeed the main tools, like equations and technical diagrams, that editors usually tell you to leave out for fear of scaring away the audience. …

I looked for book reviews and found two. The first is from Starburst Magazine, which, strangely, lists neither a date nor an author (from the review),

The Dialogues is a series of nine conversations about science told in graphic novel format; the conversationalists are men, women, children, and amateur science buffs who all have something to say about the nature of the universe. Their discussions range from multiverse and string theory to immortality, black holes, and how it’s possible to put just a cup of rice in the pan but end up with a ton more after Mom cooks it. Johnson (who also illustrated the book) believes the graphic form is especially suited for physics because “one drawing can show what it would take many words to explain” and it’s hard to argue with his noble intentions, but despite some undoubtedly thoughtful content The Dialogues doesn’t really work. Why not? Because, even with its plethora of brightly-coloured pictures, it’s still 200+ pages of talking heads. The individual conversations might give us plenty to think about, but the absence of any genuine action (or even a sense of humour) still makes The Dialogues read like very pretty homework.

Adelmar Bultheel’s December 8, 2017 review for the European Mathematical Society acknowledges issues with the book while noting its strong points,

So what is the point of producing such a graphic novel if the reader is not properly instructed about anything? In my opinion, the true message can be found in the one or two pages of notes that follow each of the eleven conversations. If you are not into the subject that you were eavesdropping, you probably have heard words, concepts, theories, etc. that you did not understand, or you might just be curious about what exactly the two were discussing. Then you should look that up on the web, or if you want to do it properly, you should consult some literature. This is what these notes are providing: they are pointing to the proper books to consult. …

This is a most unusual book for this subject and the way this is approached is most surprising. Not only the contents is heavy stuff, it is also physically heavy to read. Some 250 pages on thick glossy paper makes it a quite heavy book to hold. You probably do not want to read this in bed or take it on a train, unless you have a table in front of you to put it on. Many subjects are mentioned, but not all are explained in detail. The reader should definitely be prepared to do some extra reading to understand things better. Since most references concern other popularising books on the subject, it may require quite a lot of extra reading. But all this hard science is happening in conversations by young enthusiastic people in casual locations and it is all wrapped up in beautiful graphics showing marvellous realistic decors.

I am fascinated by this book, which I have yet to read, but I did find a trailer for it (from thedialoguesbook.com),

Enjoy!

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone knows who has seen those film shorts from the 1950s and ’60s that speculate exuberantly about what the future will bring.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI 100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report, but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin

The upcoming performance featuring a quantum computer built by D-Wave Systems (a Canadian company) and Welsh mezzo soprano Juliette Pochin will be the première of “Superposition” by Alexis Kirke. A July 13, 2016 news item on phys.org provides more detail,

What happens when you combine the pure tones of an internationally renowned mezzo soprano and the complex technology of a $15 million quantum supercomputer?

The answer will be exclusively revealed to audiences at the Port Eliot Festival [Cornwall, UK] when Superposition, created by Plymouth University composer Alexis Kirke, receives its world premiere later this summer.

A D-Wave 1000 Qubit Quantum Processor. Credit: D-Wave Systems Inc

A July 13, 2016 Plymouth University press release, which originated the news item, expands on the theme,

Combining the arts and sciences, as Dr Kirke has done with many of his previous works, the 15-minute piece will begin dark and mysterious with celebrated performer Juliette Pochin singing a low-pitched slow theme.

But gradually the quiet sounds of electronic ambience will emerge over or beneath her voice, as the sounds of her singing are picked up by a microphone and sent over the internet to the D-Wave quantum computer at the University of Southern California.

It then reacts with behaviours in the quantum realm that are turned into sounds back in the performance venue, the Round Room at Port Eliot, creating a unique and ground-breaking duet.

And when the singer ends, the quantum processes are left to slowly fade away naturally, making their final sounds as the lights go to black.

Dr Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, said:

“There are only a handful of these computers accessible in the world, and this is the first time one has been used as part of a creative performance. So while it is a great privilege to be able to put this together, it is an incredibly complex area of computing and science and it has taken almost two years to get to this stage. For most people, this will be the first time they have seen a quantum computer in action and I hope it will give them a better understanding of how it works in a creative and innovative way.”

Plymouth University is the official Creative and Cultural Partner of the Port Eliot Festival, taking place in South East Cornwall from July 28 to 31, 2016 [emphasis mine].

And Superposition will be one of a number of showcases of University talent and expertise as part of the first Port Eliot Science Lab. Being staged in the Round Room at Port Eliot, it will give festival goers the chance to explore science, see performances and take part in a range of experiments.

The three-part performance will tell the story of Niobe, one of the more tragic figures in Greek mythology, but in this case a nod to the fact that the heart of the quantum computer contains the metal named after her, niobium. It will also feature a monologue from Hamlet, interspersed with terms from quantum computing.

This is the latest of Dr Kirke’s pioneering performance works, with previous productions including an opera based on the financial crisis and a piece using a cutting edge wave-testing facility as an instrument of percussion.

Geordie Rose, CTO and Founder, D-Wave Systems, said:

“D-Wave’s quantum computing technology has been investigated in many areas such as image recognition, machine learning and finance. We are excited to see Dr Kirke, a pioneer in the field of quantum physics and the arts, utilising a D-Wave 2X in his next performance. Quantum computing is positioned to have a tremendous social impact, and Dr Kirke’s work serves not only as a piece of innovative computer arts research, but also as a way of educating the public about these new types of exotic computing machines.”

Professor Daniel Lidar, Director of the USC Center for Quantum Information Science and Technology, said:

“This is an exciting time to be in the field of quantum computing. This is a field that was purely theoretical until the 1990s and now is making huge leaps forward every year. We have been researching the D-Wave machines for four years now, and have recently upgraded to the D-Wave 2X – the world’s most advanced commercially available quantum optimisation processor. We were very happy to welcome Dr Kirke on a short training residence here at the University of Southern California recently; and are excited to be collaborating with him on this performance, which we see as a great opportunity for education and public awareness.”

Since I can’t be there, I’m hoping they will be able to successfully livestream the performance. According to Kirke who very kindly responded to my query, the festival’s remote location can make livecasting a challenge. He did note that a post-performance documentary is planned and there will be footage from the performance.

He has also provided more information about the singer and the technical/computer aspects of the performance (from a July 18, 2016 email),

Juliette Pochin: I’ve worked with her before, a couple of years ago. She has an amazing voice and style, is musically adventurous (she is a music producer herself), and brings great grace and charisma to a performance. She can be heard in the Harry Potter and Lord of the Rings soundtracks and has performed at venues such as the Royal Albert Hall, Proms in the Park, and Meatloaf!

Score: The score is in 3 parts of about 5 minutes each. There is a traditional score for parts 1 and 3 that Juliette will sing from. I wrote these manually in traditional music notation. However she can sing in free time and wait for the computer to respond. It is a very dramatic score, almost operatic. The computer’s responses are based on two algorithms: a superposition chord system, and a pitch-loudness entanglement system. The superposition chord system sends a harmony problem to the D-Wave in response to Juliette’s approximate pitch amongst other elements. The D-Wave uses an 8-qubit optimizer to return potential chords. Each potential chord has an energy associated with it. In theory the lowest energy chord is that preferred by the algorithm. However in the performance I will combine the chord solutions to create superposition chords. These are chords which represent, in a very loose way, the superposed solutions which existed in the D-Wave before collapse of the qubits. Technically they are the results of multiple collapses, but metaphorically I can’t think of a more beautiful representation of superposition: chords. These will accompany Juliette, sometimes clashing with her. Sometimes giving way to her.

The second subsystem generates non-pitched noises of different lengths, roughnesses and loudness. These are responses to Juliette, but also a result of a simple D-Wave entanglement. We know the D-Wave can entangle in 8-qubit groups. I send a binary representation of Juliette’s loudness to 4 qubits and one of approximate pitch to another 4, then entangle the two. The chosen entanglement weights are selected for their variety of solutions amongst the qubits, rather than by a particular musical logic. So the non-pitched subsystem is more of a sonification of entanglement than a musical algorithm.

Thank you Dr. Kirke for a fascinating technical description and for a description of Juliette Pochin that makes one long to hear her in performance.
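
For the curious, here is a rough, purely illustrative sketch of the “superposition chord” idea in ordinary Python. It does not use D-Wave hardware or any D-Wave software library; the candidate chords, their energies, and the union-based blending are my own assumptions drawn from Kirke’s description (the optimizer returns several candidate chords, each with an energy, and the low-energy solutions are combined rather than keeping only the single best one).

# Illustrative sketch only: blending low-energy chord solutions into one
# "superposition chord". The candidates and energies below are invented;
# a real system would get them back from an optimizer solving a harmony problem.

def superposition_chord(candidates, keep=2):
    """Merge the `keep` lowest-energy candidate chords into one pitch set.

    `candidates` is a list of (energy, chord) pairs, where a chord is a set of
    MIDI note numbers. Lower energy means the optimizer preferred that chord.
    """
    best = sorted(candidates, key=lambda pair: pair[0])[:keep]
    merged = set()
    for _, chord in best:
        merged |= chord  # union of the preferred solutions, clashes included
    return sorted(merged)

# Hypothetical optimizer output for one sung phrase: (energy, chord as MIDI notes)
candidates = [
    (-3.2, {60, 64, 67}),  # C major triad
    (-3.0, {60, 63, 67}),  # C minor triad
    (-1.1, {61, 65, 68}),  # a more distant harmonisation
    (0.4,  {62, 66, 69}),
]

print(superposition_chord(candidates))  # [60, 63, 64, 67]: major and minor blended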

For anyone who’s thinking of attending the performance or curious, you can find out more about the Port Eliot festival here, Juliette Pochin here, and Alexis Kirke here.

For anyone wondering about data sonification, I also have a Feb. 7, 2014 post featuring a data sonification project by Dr. Domenico Vicinanza, which includes a sound clip of his Voyager 1 & 2 spacecraft duet.

Mass production of nanoparticles?

With all the years of nanotechnology and nanomaterials research, it seems strange that mass production of nanoparticles is still very much in the early stages, as a Feb. 24, 2016 news item on phys.org points out,

Nanoparticles – tiny particles 100,000 times smaller than the width of a strand of hair – can be found in everything from drug delivery formulations to pollution controls on cars to HD TV sets. With special properties derived from their tiny size and subsequently increased surface area, they’re critical to industry and scientific research.

They’re also expensive and tricky to make.

Now, researchers at USC [University of Southern California] have created a new way to manufacture nanoparticles that will transform the process from a painstaking, batch-by-batch drudgery into a large-scale, automated assembly line.

A Feb. 24, 2016 USC news release (also on EurekAlert) by Robert Perkins, which originated the news item, offers additional insight,

Consider, for example, gold nanoparticles. They have been shown to easily penetrate cell membranes without causing any damage — an unusual feat given that most penetrations of cell membranes by foreign objects can damage or kill the cell. Their ability to slip through the cell’s membrane makes gold nanoparticles ideal delivery devices for medications to healthy cells or fatal doses of radiation to cancer cells.

However, a single milligram of gold nanoparticles currently costs about $80 (depending on the size of the nanoparticles). That places the price of gold nanoparticles at $80,000 per gram while a gram of pure, raw gold goes for about $50.

“It’s not the gold that’s making it expensive,” Malmstadt [Noah Malmstadt of the USC Viterbi School of Engineering] said. “We can make them, but it’s not like we can cheaply make a 50-gallon drum full of them.”

A fluid situation

At this time, the process of manufacturing a nanoparticle typically involves a technician in a chemistry lab mixing up a batch of chemicals by hand in traditional lab flasks and beakers.

The new technique used by Brutchey [Richard Brutchey of the USC Dornsife College of Letters, Arts and Sciences] and Malmstadt instead relies on microfluidics — technology that manipulates tiny droplets of fluid in narrow channels.

“In order to go large scale, we have to go small,” Brutchey said.

Really small.

The team 3-D printed tubes about 250 micrometers in diameter, which they believe to be the smallest, fully enclosed 3-D printed tubes anywhere. For reference, your average-sized speck of dust is 50 micrometers wide.

They then built a parallel network of four of these tubes, side-by-side, and ran a combination of two nonmixing fluids (like oil and water) through them. As the two fluids fought to get out through the openings, they squeezed off tiny droplets. Each of these droplets acted as a micro-scale chemical reactor in which materials were mixed and nanoparticles were generated. Each microfluidic tube can create millions of identical droplets that perform the same reaction.

This sort of system has been envisioned in the past, but it hasn’t been able to be scaled up because the parallel structure meant that if one tube got jammed, it would cause a ripple effect of changing pressures along its neighbors, knocking out the entire system. Think of it like losing a single Christmas light in one of the old-style strands — lose one and you lose them all.

Brutchey and Malmstadt bypassed this problem by altering the geometry of the tubes themselves, shaping the junction between the tubes such that the particles come out a uniform size and the system is immune to pressure changes.

Here’s a link to and a citation for the paper,

Flow invariant droplet formation for stable parallel microreactors by Carson T. Riche, Emily J. Roberts, Malancha Gupta, Richard L. Brutchey & Noah Malmstadt. Nature Communications 7, Article number: 10780 doi:10.1038/ncomms10780 Published 23 February 2016

This is an open access paper.

Handling massive digital datasets the quantum way

A Jan. 25, 2016 news item on phys.org describes a new approach to analyzing and managing huge datasets,

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data—far more information than people can actually process, manage, or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT [Massachusetts Institute of Technology], the University of Waterloo, and the University of Southern California [USC].

A Jan. 25, 2016 MIT news release (*also on EurekAlert*), which originated the news item, describes the theory in more detail,

… Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”
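
To give non-specialists a hands-on feel for the simplest of the topological features Lloyd describes (how many connected pieces a dataset falls into at a given scale), here is a small, purely classical sketch of my own. It is not the authors’ quantum algorithm; the point cloud and the distance threshold are made up, and the whole thing runs on a basic union-find structure.

# Classical, toy illustration of one topological feature: the number of connected
# components of a point cloud at a given distance scale. The quantum algorithm in
# the paper tackles far larger problems; this just shows the idea.
from math import dist

def connected_components(points, scale):
    """Count clusters when points closer than `scale` are considered connected."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if dist(points[i], points[j]) <= scale:
                parent[find(i)] = find(j)  # merge the two clusters

    return len({find(i) for i in range(len(points))})

# Two tight clusters plus an outlier (made-up data):
cloud = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (9.0, 0.0)]
print(connected_components(cloud, scale=1.0))  # 3 components at this scale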

Here’s a link to and a citation for the paper,

Quantum algorithms for topological and geometric analysis of data by Seth Lloyd, Silvano Garnerone, & Paolo Zanardi. Nature Communications 7, Article number: 10138 doi:10.1038/ncomms10138 Published 25 January 2016

This paper is open access.

ETA Jan. 25, 2016 1245 hours PST,

Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

*’also on EurekAlert’ text and link added Jan. 26, 2016.

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community was mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
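The exponential growth Metz describes is easy to see in the standard textbook “state vector” picture, sketched below in Python with NumPy. This is the generic gate-model description of qubits, not a model of D-Wave’s annealing hardware.

```python
# A minimal sketch: n qubits are described by 2**n complex amplitudes,
# which is where the exponential growth comes from.
import numpy as np

def n_qubit_state(n):
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0                  # all qubits start in |00...0>
    return state

hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Put two qubits into an equal superposition of 00, 01, 10 and 11.
state = np.kron(hadamard, hadamard) @ n_qubit_state(2)
print(state)                        # four amplitudes, each 0.5
print(len(n_qubit_state(10)))       # 1024 amplitudes for just 10 qubits
```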

D-Wave claims to have found a solution to the decoherence problem and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.
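To give a flavour of what an “optimization” problem of this kind looks like, here is a purely classical stand-in I wrote: simulated annealing on a tiny Ising-style energy function, the general class of problem D-Wave’s hardware targets. The couplings are invented for illustration, and this classical sketch does not reproduce quantum annealing itself.

```python
# A toy Ising-style problem: choose spins s[i] in {-1, +1} to minimize
# E(s) = sum of J[i, j] * s[i] * s[j]. The couplings J are invented for illustration.
import math
import random

J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): -1.0}

def energy(s):
    return sum(j * s[a] * s[b] for (a, b), j in J.items())

def anneal(n_spins=3, steps=2000, t_start=2.0, t_end=0.01):
    s = [random.choice([-1, 1]) for _ in range(n_spins)]
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)   # gradually "cool" the system
        i = random.randrange(n_spins)
        old = energy(s)
        s[i] = -s[i]                                         # propose flipping one spin
        if energy(s) > old and random.random() > math.exp((old - energy(s)) / t):
            s[i] = -s[i]                                     # reject most uphill moves
    return s, energy(s)

print(anneal())   # typically finds the lowest-energy configuration, energy -3.0
```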

It takes a lot of innovation before you make big strides forward, and I think D-Wave is to be congratulated on producing what is, to my knowledge, the only commercially available form of quantum computing of any sort in the world.

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”
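As a reminder of what a “quantum logic gate” on two qubits actually computes, here is a small NumPy sketch of the textbook CNOT gate acting on a two-qubit state vector. It illustrates only the logic; it says nothing about how the UNSW team realizes such a gate in silicon.

```python
# CNOT: flip the second (target) qubit when the first (control) qubit is 1.
# Basis order for the state vector is |00>, |01>, |10>, |11>.
import numpy as np

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

ten = np.zeros(4); ten[2] = 1.0       # the state |10>
print(CNOT @ ten)                     # -> |11>: the target flips

# Acting on a superposed control qubit produces an entangled (Bell) state.
plus_zero = np.array([1, 0, 1, 0]) / np.sqrt(2)   # (|00> + |10>) / sqrt(2)
print(CNOT @ plus_zero)                            # (|00> + |11>) / sqrt(2)
```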

According to the article, the university is looking for industrial partners to help exploit this breakthrough. Francis’ article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

Replace silicon with black phosphorus instead of graphene?

I have two black phosphorus pieces. This first piece of research comes out of ‘La belle province’ or, as it’s more usually called, Québec (Canada).

Foundational research on phosphorene

There’s a lot of interest in replacing silicon for a number of reasons and, increasingly, there’s interest in finding an alternative to graphene.

A July 7, 2015 news item on Nanotechnology Now describes a new material for use as transistors,

As scientists continue to hunt for a material that will make it possible to pack more transistors on a chip, new research from McGill University and Université de Montréal adds to evidence that black phosphorus could emerge as a strong candidate.

In a study published today in Nature Communications, the researchers report that when electrons move in a phosphorus transistor, they do so only in two dimensions. The finding suggests that black phosphorus could help engineers surmount one of the big challenges for future electronics: designing energy-efficient transistors.

A July 7, 2015 McGill University news release on EurekAlert, which originated the news item, describes the field of 2D materials and the research into black phosphorus and its 2D version, phosphorene (analogous to graphite and graphene),

“Transistors work more efficiently when they are thin, with electrons moving in only two dimensions,” says Thomas Szkopek, an associate professor in McGill’s Department of Electrical and Computer Engineering and senior author of the new study. “Nothing gets thinner than a single layer of atoms.”

In 2004, physicists at the University of Manchester in the U.K. first isolated and explored the remarkable properties of graphene — a one-atom-thick layer of carbon. Since then scientists have rushed to investigate a range of other two-dimensional materials. One of those is black phosphorus, a form of phosphorus that is similar to graphite and can be separated easily into single atomic layers, known as phosphorene.

Phosphorene has sparked growing interest because it overcomes many of the challenges of using graphene in electronics. Unlike graphene, which acts like a metal, black phosphorus is a natural semiconductor: it can be readily switched on and off.

“To lower the operating voltage of transistors, and thereby reduce the heat they generate, we have to get closer and closer to designing the transistor at the atomic level,” Szkopek says. “The toolbox of the future for transistor designers will require a variety of atomic-layered materials: an ideal semiconductor, an ideal metal, and an ideal dielectric. All three components must be optimized for a well designed transistor. Black phosphorus fills the semiconducting-material role.”

The work resulted from a multidisciplinary collaboration among Szkopek’s nanoelectronics research group, the nanoscience lab of McGill Physics Prof. Guillaume Gervais, and the nanostructures research group of Prof. Richard Martel in Université de Montréal’s Department of Chemistry.

To examine how the electrons move in a phosphorus transistor, the researchers observed them under the influence of a magnetic field in experiments performed at the National High Magnetic Field Laboratory in Tallahassee, FL, the largest and highest-powered magnet laboratory in the world. This research “provides important insights into the fundamental physics that dictate the behavior of black phosphorus,” says Tim Murphy, DC Field Facility Director at the Florida facility.

“What’s surprising in these results is that the electrons are able to be pulled into a sheet of charge which is two-dimensional, even though they occupy a volume that is several atomic layers in thickness,” Szkopek says. That finding is significant because it could potentially facilitate manufacturing the material — though at this point “no one knows how to manufacture this material on a large scale.”

“There is a great emerging interest around the world in black phosphorus,” Szkopek says. “We are still a long way from seeing atomic layer transistors in a commercial product, but we have now moved one step closer.”

Here’s a link to and a citation for the paper,

Two-dimensional magnetotransport in a black phosphorus naked quantum well by V. Tayari, N. Hemsworth, I. Fakih, A. Favron, E. Gaufrès, G. Gervais, R. Martel & T. Szkopek. Nature Communications 6, Article number: 7702 doi:10.1038/ncomms8702 Published 07 July 2015

This is an open access paper.

The second piece of research into black phosphorus is courtesy of an international collaboration.

A phosphorene transistor

A July 9, 2015 Technical University of Munich (TUM) press release (also on EurekAlert) describes the formation of a phosphorene transistor made possible by the introduction of arsenic,

Chemists at the Technische Universität München (TUM) have now developed a semiconducting material in which individual phosphorus atoms are replaced by arsenic. In a collaborative international effort, American colleagues have built the first field-effect transistors from the new material.

For many decades silicon has formed the basis of modern electronics. To date silicon technology could provide ever tinier transistors for smaller and smaller devices. But the size of silicon transistors is reaching its physical limit. Also, consumers would like to have flexible devices, devices that can be incorporated into clothing and the likes. However, silicon is hard and brittle. All this has triggered a race for new materials that might one day replace silicon.

Black arsenic phosphorus might be such a material. Like graphene, which consists of a single layer of carbon atoms, it forms extremely thin layers. The array of possible applications ranges from transistors and sensors to mechanically flexible semiconductor devices. Unlike graphene, whose electronic properties are similar to those of metals, black arsenic phosphorus behaves like a semiconductor.

The press release goes on to provide more detail about the collaboration and the research,

A cooperation between the Technical University of Munich and the University of Regensburg on the German side and the University of Southern California (USC) and Yale University in the United States has now, for the first time, produced a field effect transistor made of black arsenic phosphorus. The compounds were synthesized by Marianne Koepf at the laboratory of the research group for Synthesis and Characterization of Innovative Materials at the TUM. The field effect transistors were built and characterized by a group headed by Professor Zhou and Dr. Liu at the Department of Electrical Engineering at USC.

The new technology developed at TUM allows the synthesis of black arsenic phosphorus without high pressure. This requires less energy and is cheaper. The gap between valence and conduction bands can be precisely controlled by adjusting the arsenic concentration. “This allows us to produce materials with previously unattainable electronic and optical properties in an energy window that was hitherto inaccessible,” says Professor Tom Nilges, head of the research group for Synthesis and Characterization of Innovative Materials.

Detectors for infrared

With an arsenic concentration of 83 percent the material exhibits an extremely small band gap of only 0.15 electron volts, making it predestined for sensors which can detect long wavelength infrared radiation. LiDAR (Light Detection and Ranging) sensors operate in this wavelength range, for example. They are used, among other things, as distance sensors in automobiles. Another application is the measurement of dust particles and trace gases in environmental monitoring.
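As a quick sanity check of my own (not from the press release), a band gap of 0.15 electron volts corresponds to photons of wavelength roughly hc/E:

```python
# Convert a band gap in electron volts to the matching photon wavelength.
h_times_c = 1239.84          # Planck constant times speed of light, in eV·nm
band_gap_ev = 0.15

wavelength_nm = h_times_c / band_gap_ev
print(f"{wavelength_nm / 1000:.1f} µm")   # about 8.3 µm
```

A wavelength around 8 µm sits in the long-wavelength infrared, which is why such a narrow gap is attractive for the sensing applications mentioned.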

A further interesting aspect of these new, two-dimensional semiconductors is their anisotropic electronic and optical behavior. The material exhibits different characteristics along the x- and y-axes in the same plane. To produce graphene-like films, the material can be peeled off in ultrathin layers. The thinnest films obtained so far are only two atomic layers thick.

Here’s a link to and a citation for the paper,

Black Arsenic–Phosphorus: Layered Anisotropic Infrared Semiconductors with Highly Tunable Compositions and Properties by Bilu Liu, Marianne Köpf, Ahmad N. Abbas, Xiaomu Wang, Qiushi Guo, Yichen Jia, Fengnian Xia, Richard Weihrich, Frederik Bachhuber, Florian Pielnhofer, Han Wang, Rohan Dhall, Stephen B. Cronin, Mingyuan Ge, Xin Fang, Tom Nilges, and Chongwu Zhou. Advanced Materials DOI: 10.1002/adma.201501758 Article first published online: 25 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Dexter Johnson, on his Nanoclast blog (on the Institute for Electrical and Electronics Engineers website), adds more information about black phosphorus and its electrical properties in his July 9, 2015 posting about the Germany/US collaboration (Note: Links have been removed),

Black phosphorus has been around for about 100 years, but recently it has been synthesized as a two-dimensional material—dubbed phosphorene in reference to its two-dimensional cousin, graphene. Black phosphorus is quite attractive for electronic applications like field-effect transistors because of its inherent band gap and it is one of the few 2-D materials to be a natively p-type semiconductor.

One final comment: I notice the Germany-US work was published weeks prior to the Canadian research, suggesting that the TUM July 9, 2015 press release is an attempt to capitalize on the interest generated by the Canadian research. That’s a smart move.

What is a buckybomb?

I gather buckybombs have something to do with cancer treatments. From a March 18, 2015 news item on ScienceDaily,

In 1996, a trio of scientists won the Nobel Prize for Chemistry for their discovery of Buckminsterfullerene — soccer-ball-shaped spheres of 60 joined carbon atoms that exhibit special physical properties.

Now, 20 years later, scientists have figured out how to turn them into Buckybombs.

These nanoscale explosives show potential for use in fighting cancer, with the hope that they could one day target and eliminate cancer at the cellular level — triggering tiny explosions that kill cancer cells with minimal impact on surrounding tissue.

“Future applications would probably use other types of carbon structures — such as carbon nanotubes, but we started with Bucky-balls because they’re very stable, and a lot is known about them,” said Oleg V. Prezhdo, professor of chemistry at the USC [University of Southern California] Dornsife College of Letters, Arts and Sciences and corresponding author of a paper on the new explosives that was published in The Journal of Physical Chemistry on February 24 [2015].

A March 19, 2015 USC news release by Robert Perkins, which despite its publication date originated the news item, describes current cancer treatments with carbon nanotubes and this new technique with fullerenes,

Carbon nanotubes, close relatives of Bucky-balls, are used already to treat cancer. They can be accumulated in cancer cells and heated up by a laser, which penetrates through surrounding tissues without affecting them and directly targets carbon nanotubes. Modifying carbon nanotubes the same way as the Buckybombs will make the cancer treatment more efficient — reducing the amount of treatment needed, Prezhdo said.

To build the miniature explosives, Prezhdo and his colleagues attached 12 nitro (NO2) groups to a single Bucky-ball and then heated it. Within picoseconds, the Bucky-ball disintegrated — increasing temperature by thousands of degrees in a controlled explosion.

The source of the explosion’s power is the breaking of powerful carbon bonds, which snap apart to bond with oxygen from the nitrous oxide, resulting in the creation of carbon dioxide, Prezhdo said.

I’m glad this technique would make treatment more effective but I do pause at the thought of having exploding buckyballs in my body or, for that matter, anyone else’s.

The research was highlighted earlier this month in a March 5, 2015 article by Lisa Zyga for phys.org,

The buckybomb combines the unique properties of two classes of materials: carbon structures and energetic nanomaterials. Carbon materials such as C60 can be chemically modified fairly easily to change their properties. Meanwhile, NO2 groups are known to contribute to detonation and combustion processes because they are a major source of oxygen. So, the scientists wondered what would happen if NO2 groups were attached to C60 molecules: would the whole thing explode? And how?

The simulations answered these questions by revealing the explosion in step-by-step detail. Starting with an intact buckybomb (technically called dodecanitrofullerene, or C60(NO2)12), the researchers raised the simulated temperature to 1000 K (700 °C). Within a picosecond (10^-12 second), the NO2 groups begin to isomerize, rearranging their atoms and forming new groups with some of the carbon atoms from the C60. As a few more picoseconds pass, the C60 structure loses some of its electrons, which interferes with the bonds that hold it together, and, in a flash, the large molecule disintegrates into many tiny pieces of diatomic carbon (C2). What’s left is a mixture of gases including CO2, NO2, and N2, as well as C2.

I encourage you to read Zyga’s article in full, as she provides more scientific detail and notes that this discovery could have applications for the military and for industry.

Here’s a link to and a citation for the researchers’ paper,

Buckybomb: Reactive Molecular Dynamics Simulation by Vitaly V. Chaban, Eudes Eterno Fileti, and Oleg V. Prezhdo. J. Phys. Chem. Lett., 2015, 6 (5), pp 913–917 DOI: 10.1021/acs.jpclett.5b00120 Publication Date (Web): February 24, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

More investment money for Canada’s D-Wave Systems (quantum computing)

A Feb. 2, 2015 news item on Nanotechnology Now features D-Wave Systems (located in the Vancouver region, Canada) and its recent funding bonanza of $29 million (CAD),

Harris & Harris Group, Inc. (Nasdaq:TINY), an investor in transformative companies enabled by disruptive science, notes the announcement by portfolio company, D-Wave Systems, Inc., that it has closed $29 million (CAD) in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million (CAD) raised in 2014. Harris & Harris Group’s total investment in D-Wave is approximately $5.8 million (USD). D-Wave’s announcement also includes highlights of 2014, a year of strong growth and advancement for D-Wave.

A Jan. 29, 2015 D-Wave news release provides more details about the new investment and D-Wave’s 2014 triumphs,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has closed $29 million in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million raised in 2014.

“The investment is a testament to the progress D-Wave continues to make as the leader in quantum computing systems,” said Vern Brownell, CEO of D-Wave. “The funding we received in 2014 will advance our quantum hardware and software development, as well as our work on leading edge applications of our systems. By making quantum computing available to more organizations, we’re driving our goal of finding solutions to the most complex optimization and machine learning applications in national defense, computing, research and finance.”

The funding follows a year of strong growth and advancement for D-Wave. Highlights include:

•    Significant progress made towards the release of the next D-Wave quantum system featuring a 1000 qubit processor, which is currently undergoing testing in D-Wave’s labs.
•    The company’s patent portfolio grew to over 150 issued patents worldwide, with 11 new U.S. patents being granted in 2014, covering aspects of D-Wave’s processor technology, systems and techniques for solving computational problems using D-Wave’s technology.
•    D-Wave Professional Services launched, providing quantum computing experts to collaborate directly with customers, and deliver training classes on the usage and programming of the D-Wave system to a number of national laboratories, businesses and universities.
•    Partnerships were established with DNA-SEQ and 1QBit, companies that are developing quantum software applications in the spheres of medicine and finance, respectively.
•    Research throughout the year continued to validate D-Wave’s work, including a study showing further evidence of quantum entanglement by D-Wave and USC  [University of Southern California] scientists, published in Physical Review X this past May.

Since 2011, some of the most prestigious organizations in the world, including Lockheed Martin, NASA, Google, USC and the Universities Space Research Association (USRA), have partnered with D-Wave to use their quantum computing systems. In 2015, these partners will continue to work with the D-Wave computer, conducting pioneering research in machine learning, optimization, and space exploration.

D-Wave, which already employs over 120 people, plans to expand hiring with the additional funding. Key areas of growth include research, processor and systems development and software engineering.

Harris & Harris Group offers a description of D-Wave which mentions nanotechnology and hosts a couple of explanatory videos,

D-Wave Systems develops an adiabatic quantum computer (QC).

Status
Privately Held

The Market
Electronics – High Performance Computing

The Problem
Traditional or “classical computers” are constrained by the sequential character of data processing that makes the solving of NP-hard problems difficult or potentially impossible in reasonable timeframes. These types of computationally intense problems are commonly observed in software verifications, scheduling and logistics planning, integer programming, bioinformatics and financial portfolio optimization.

D-Wave’s Solution
D-Wave develops quantum computers that are capable of processing data using the quantum mechanical properties of matter. This leverage of quantum mechanics enables the identification of solutions to some NP-hard problems in a reasonable timeframe, instead of the exponential time needed for any classical digital computer. D-Wave sold and installed its first quantum computing system to a commercial customer in 2011.

Nanotechnology Factor
To function properly, the D-Wave processor requires tight control and manipulation of quantum mechanical phenomena. This control and manipulation is achieved by creating integrated circuits based on Josephson junctions and other superconducting circuitry. By picking superconductors, D-Wave managed to combine quantum mechanical behavior with the macroscopic dimensions needed for high-yield design and manufacturing.
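Going back to the “exponential time” point in the Harris & Harris description, here is a toy illustration of my own (not theirs) of why exhaustive classical search over such an optimization problem blows up: the number of candidate solutions doubles with every added binary variable.

```python
# Brute-force search over a tiny Ising-style problem; the couplings J are invented
# purely for illustration.
from itertools import product

def brute_force_minimum(J, n):
    """Try every assignment of n spins in {-1, +1} and return the lowest energy found."""
    best = None
    for spins in product((-1, 1), repeat=n):
        e = sum(j * spins[a] * spins[b] for (a, b), j in J.items())
        best = e if best is None else min(best, e)
    return best

J = {(0, 1): 1.0, (0, 2): 1.0, (1, 2): 1.0}   # a "frustrated" triangle of couplings
print(brute_force_minimum(J, 3))              # -1.0, found after checking 2**3 = 8 candidates
print([2 ** n for n in (10, 20, 30, 40)])     # candidate counts grow to ~10^12 by 40 variables
```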

It seems D-Wave has made some research and funding strides since I last wrote about the company in a Jan. 19, 2012 posting, although there is no mention of quantum computer sales.