Tag Archives: University of Southern California

How might artificial intelligence affect urban life in 2030? A study

Peering into the future is always a chancy business, as anyone who has seen those film shorts from the 1950s and ’60s speculating exuberantly about what the future will bring can attest.

A sober approach (appropriate to our times) has been taken in a study about the impact that artificial intelligence might have by 2030. From a Sept. 1, 2016 Stanford University news release (also on EurekAlert) by Tom Abate (Note: Links have been removed),

A panel of academic and industrial thinkers has looked ahead to 2030 to forecast how advances in artificial intelligence (AI) might affect life in a typical North American city – in areas as diverse as transportation, health care and education – and to spur discussion about how to ensure the safe, fair and beneficial development of these rapidly emerging technologies.

Titled “Artificial Intelligence and Life in 2030,” this year-long investigation is the first product of the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted by Stanford to inform societal deliberation and provide guidance on the ethical development of smart software, sensors and machines.

“We believe specialized AI applications will become both increasingly common and more useful by 2030, improving our economy and quality of life,” said Peter Stone, a computer scientist at the University of Texas at Austin and chair of the 17-member panel of international experts. “But this technology will also create profound challenges, affecting jobs and incomes and other issues that we should begin addressing now to ensure that the benefits of AI are broadly shared.”

The new report traces its roots to a 2009 study that brought AI scientists together in a process of introspection that became ongoing in 2014, when Eric and Mary Horvitz created the AI100 endowment through Stanford. AI100 formed a standing committee of scientists and charged this body with commissioning periodic reports on different aspects of AI over the ensuing century.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” said Russ Altman, a professor of bioengineering and the Stanford faculty director of AI100. “Stanford is excited to host this process of introspection. This work makes a practical contribution to the public debate on the roles and implications of artificial intelligence.”

The AI100 standing committee first met in 2015, led by chairwoman and Harvard computer scientist Barbara Grosz. It sought to convene a panel of scientists with diverse professional and personal backgrounds and enlist their expertise to assess the technological, economic and policy implications of potential AI applications in a societally relevant setting.

“AI technologies can be reliable and broadly beneficial,” Grosz said. “Being transparent about their design and deployment challenges will build trust and avert unjustified fear and suspicion.”

The report investigates eight domains of human activity in which AI technologies are beginning to affect urban life in ways that will become increasingly pervasive and profound by 2030.

The 28,000-word report includes a glossary to help nontechnical readers understand how AI applications such as computer vision might help screen tissue samples for cancers or how natural language processing will allow computerized systems to grasp not simply the literal definitions, but the connotations and intent, behind words.

The report is broken into eight sections focusing on applications of AI. Five examine application arenas such as transportation where there is already buzz about self-driving cars. Three other sections treat technological impacts, like the section on employment and workplace trends which touches on the likelihood of rapid changes in jobs and incomes.

“It is not too soon for social debate on how the fruits of an AI-dominated economy should be shared,” the researchers write in the report, noting also the need for public discourse.

“Currently in the United States, at least sixteen separate agencies govern sectors of the economy related to AI technologies,” the researchers write, highlighting issues raised by AI applications: “Who is responsible when a self-driven car crashes or an intelligent medical device fails? How can AI applications be prevented from [being used for] racial discrimination or financial cheating?”

The eight sections discuss:

Transportation: Autonomous cars, trucks and, possibly, aerial delivery vehicles may alter how we commute, work and shop and create new patterns of life and leisure in cities.

Home/service robots: Like the robotic vacuum cleaners already in some homes, specialized robots will clean and provide security in live/work spaces that will be equipped with sensors and remote controls.

Health care: Devices to monitor personal health and robot-assisted surgery are hints of things to come if AI is developed in ways that gain the trust of doctors, nurses, patients and regulators.

Education: Interactive tutoring systems already help students learn languages, math and other skills. More is possible if technologies like natural language processing platforms develop to augment instruction by humans.

Entertainment: The conjunction of content creation tools, social networks and AI will lead to new ways to gather, organize and deliver media in engaging, personalized and interactive ways.

Low-resource communities: Investments in uplifting technologies like predictive models to prevent lead poisoning or improve food distributions could spread AI benefits to the underserved.

Public safety and security: Cameras, drones and software to analyze crime patterns should use AI in ways that reduce human bias and enhance safety without loss of liberty or dignity.

Employment and workplace: Work should start now on how to help people adapt as the economy undergoes rapid changes as many existing jobs are lost and new ones are created.

“Until now, most of what is known about AI comes from science fiction books and movies,” Stone said. “This study provides a realistic foundation to discuss how AI technologies are likely to affect society.”

Grosz said she hopes the AI100 report “initiates a century-long conversation about ways AI-enhanced technologies might be shaped to improve life and societies.”

You can find the AI100 website here, and the group’s first paper, “Artificial Intelligence and Life in 2030,” here. Unfortunately, I don’t have time to read the report but I hope to do so soon.

The AI100 website’s About page offered a surprise,

This effort, called the One Hundred Year Study on Artificial Intelligence, or AI100, is the brainchild of computer scientist and Stanford alumnus Eric Horvitz who, among other credits, is a former president of the Association for the Advancement of Artificial Intelligence.

In that capacity Horvitz convened a conference in 2009 at which top researchers considered advances in artificial intelligence and its influences on people and society, a discussion that illuminated the need for continuing study of AI’s long-term implications.

Now, together with Russ Altman, a professor of bioengineering and computer science at Stanford, Horvitz has formed a committee that will select a panel to begin a series of periodic studies on how AI will affect automation, national security, psychology, ethics, law, privacy, democracy and other issues.

“Artificial intelligence is one of the most profound undertakings in science, and one that will affect every aspect of human life,” said Stanford President John Hennessy, who helped initiate the project. “Given Stanford’s pioneering role in AI and our interdisciplinary mindset, we feel obliged and qualified to host a conversation about how artificial intelligence will affect our children and our children’s children.”

Five leading academicians with diverse interests will join Horvitz and Altman in launching this effort. They are:

  • Barbara Grosz, the Higgins Professor of Natural Sciences at Harvard University and an expert on multi-agent collaborative systems;
  • Deirdre K. Mulligan, a lawyer and a professor in the School of Information at the University of California, Berkeley, who collaborates with technologists to advance privacy and other democratic values through technical design and policy;
  • Yoav Shoham, a professor of computer science at Stanford, who seeks to incorporate common sense into AI;
  • Tom Mitchell, the E. Fredkin University Professor and chair of the machine learning department at Carnegie Mellon University, whose studies include how computers might learn to read the Web;
  • and Alan Mackworth, a professor of computer science at the University of British Columbia [emphases mine] and the Canada Research Chair in Artificial Intelligence, who built the world’s first soccer-playing robot.

I wasn’t expecting to see a Canadian listed as a member of the AI100 standing committee and then I got another surprise (from the AI100 People webpage),

Study Panels

Study Panels are planned to convene every 5 years to examine some aspect of AI and its influences on society and the world. The first study panel was convened in late 2015 to study the likely impacts of AI on urban life by the year 2030, with a focus on typical North American cities.

2015 Study Panel Members

  • Peter Stone, UT Austin, Chair
  • Rodney Brooks, Rethink Robotics
  • Erik Brynjolfsson, MIT
  • Ryan Calo, University of Washington
  • Oren Etzioni, Allen Institute for AI
  • Greg Hager, Johns Hopkins University
  • Julia Hirschberg, Columbia University
  • Shivaram Kalyanakrishnan, IIT Bombay
  • Ece Kamar, Microsoft
  • Sarit Kraus, Bar Ilan University
  • Kevin Leyton-Brown, [emphasis mine] UBC [University of British Columbia]
  • David Parkes, Harvard
  • Bill Press, UT Austin
  • AnnaLee (Anno) Saxenian, Berkeley
  • Julie Shah, MIT
  • Milind Tambe, USC
  • Astro Teller, Google[X]

I see they have representation from Israel, India, and the private sector as well. Refreshingly, there’s more than one woman on the standing committee and in this first study group. It’s good to see these efforts at inclusiveness and I’m particularly delighted with the inclusion of an organization from Asia. All too often inclusiveness means Europe, especially the UK. So, it’s good (and I think important) to see a different range of representation.

As for the content of the report, should anyone have opinions about it, please do let me know your thoughts in the blog comments.

Cornwall (UK) connects with University of Southern California for performance by a quantum computer (D-Wave) and mezzo soprano Juliette Pochin

The upcoming performance featuring a quantum computer built by D-Wave Systems (a Canadian company) and Welsh mezzo soprano Juliette Pochin is the première of “Superposition” by Alexis Kirke. A July 13, 2016 news item on phys.org provides more detail,

What happens when you combine the pure tones of an internationally renowned mezzo soprano and the complex technology of a $15 million quantum supercomputer?

The answer will be exclusively revealed to audiences at the Port Eliot Festival [Cornwall, UK] when Superposition, created by Plymouth University composer Alexis Kirke, receives its world premiere later this summer.

A D-Wave 1000 Qubit Quantum Processor. Credit: D-Wave Systems Inc

A July 13, 2016 Plymouth University press release, which originated the news item, expands on the theme,

Combining the arts and sciences, as Dr Kirke has done with many of his previous works, the 15-minute piece will begin dark and mysterious with celebrated performer Juliette Pochin singing a low-pitched slow theme.

But gradually the quiet sounds of electronic ambience will emerge over or beneath her voice, as the sounds of her singing are picked up by a microphone and sent over the internet to the D-Wave quantum computer at the University of Southern California.

It then reacts with behaviours in the quantum realm that are turned into sounds back in the performance venue, the Round Room at Port Eliot, creating a unique and ground-breaking duet.

And when the singer ends, the quantum processes are left to slowly fade away naturally, making their final sounds as the lights go to black.

Dr Kirke, a member of the Interdisciplinary Centre for Computer Music Research at Plymouth University, said:

“There are only a handful of these computers accessible in the world, and this is the first time one has been used as part of a creative performance. So while it is a great privilege to be able to put this together, it is an incredibly complex area of computing and science and it has taken almost two years to get to this stage. For most people, this will be the first time they have seen a quantum computer in action and I hope it will give them a better understanding of how it works in a creative and innovative way.”

Plymouth University is the official Creative and Cultural Partner of the Port Eliot Festival, taking place in South East Cornwall from July 28 to 31, 2016 [emphasis mine].

And Superposition will be one of a number of showcases of University talent and expertise as part of the first Port Eliot Science Lab. Being staged in the Round Room at Port Eliot, it will give festival goers the chance to explore science, see performances and take part in a range of experiments.

The three-part performance will tell the story of Niobe, one of the more tragic figures in Greek mythology, but in this case a nod to the fact the heart of the quantum computer contains the metal named after her, niobium. It will also feature a monologue from Hamlet, interspersed with terms from quantum computing.

This is the latest of Dr Kirke’s pioneering performance works, with previous productions including an opera based on the financial crisis and a piece using a cutting edge wave-testing facility as an instrument of percussion.

Geordie Rose, CTO and Founder, D-Wave Systems, said:

“D-Wave’s quantum computing technology has been investigated in many areas such as image recognition, machine learning and finance. We are excited to see Dr Kirke, a pioneer in the field of quantum physics and the arts, utilising a D-Wave 2X in his next performance. Quantum computing is positioned to have a tremendous social impact, and Dr Kirke’s work serves not only as a piece of innovative computer arts research, but also as a way of educating the public about these new types of exotic computing machines.”

Professor Daniel Lidar, Director of the USC Center for Quantum Information Science and Technology, said:

“This is an exciting time to be in the field of quantum computing. This is a field that was purely theoretical until the 1990s and now is making huge leaps forward every year. We have been researching the D-Wave machines for four years now, and have recently upgraded to the D-Wave 2X – the world’s most advanced commercially available quantum optimisation processor. We were very happy to welcome Dr Kirke on a short training residence here at the University of Southern California recently; and are excited to be collaborating with him on this performance, which we see as a great opportunity for education and public awareness.”

Since I can’t be there, I’m hoping they will be able to successfully livestream the performance. According to Kirke, who very kindly responded to my query, the festival’s remote location can make livecasting a challenge. He did note that a post-performance documentary is planned and there will be footage from the performance.

He has also provided more information about the singer and the technical/computer aspects of the performance (from a July 18, 2016 email),

Juliette Pochin: I’ve worked with her before a couple of years ago. She has an amazing voice and style, is musically adventurous (she is a music producer herself), and brings great grace and charisma to a performance. She can be heard in the Harry Potter and Lord of the Rings soundtracks and has performed at venues such as the Royal Albert Hall, Proms in the Park, and Meatloaf!

Score: The score is in 3 parts of about 5 minutes each. There is a traditional score for parts 1 and 3 that Juliette will sing from. I wrote these manually in traditional music notation. However she can sing in free time and wait for the computer to respond. It is a very dramatic score, almost operatic. The computer’s responses are based on two algorithms: a superposition chord system, and a pitch-loudness entanglement system. The superposition chord system sends a harmony problem to the D-Wave in response to Juliette’s approximate pitch amongst other elements. The D-Wave uses an 8-qubit optimizer to return potential chords. Each potential chord has an energy associated with it. In theory the lowest energy chord is that preferred by the algorithm. However in the performance I will combine the chord solutions to create superposition chords. These are chords which represent, in a very loose way, the superposed solutions which existed in the D-Wave before collapse of the qubits. Technically they are the results of multiple collapses, but metaphorically I can’t think of a more beautiful representation of superposition: chords. These will accompany Juliette, sometimes clashing with her. Sometimes giving way to her.

The second subsystem generates non-pitched noises of different lengths, roughnesses and loudness. These are responses to Juliette, but also a result of a simple D-Wave entanglement. We know the D-Wave can entangle in 8-qubit groups. I send a binary representation of Juliette’s loudness to 4 qubits and one of approximate pitch to another 4, then entangle the two. The chosen entanglement weights are selected for their variety of solutions amongst the qubits, rather than by a particular musical logic. So the non-pitched subsystem is more of a sonification of entanglement than a musical algorithm.
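For readers curious about what the “superposition chord” idea might look like in code, here is a minimal, purely illustrative Python sketch. It does not use Kirke’s actual software or the D-Wave interface; the candidate chords, their energies, the blending rule and the 4-bit quantization are all invented stand-ins for what he describes above.

# Hypothetical sketch of the two subsystems Kirke describes.
# A quantum annealer returns several candidate chords, each with an energy;
# rather than keeping only the lowest-energy answer, the best few are blended.

def superposition_chord(candidates, keep=3):
    """Blend the lowest-energy chord candidates into one pitch set.
    candidates: list of (chord, energy) pairs; a chord is a set of MIDI notes."""
    best = sorted(candidates, key=lambda pair: pair[1])[:keep]
    blended = set()
    for chord, _energy in best:
        blended |= chord              # union of the preferred solutions
    return sorted(blended)

def to_four_bits(value, lo, hi):
    """Quantize a measurement (loudness or pitch) to 4 bits, roughly how a
    value might be handed to a 4-qubit register before entangling."""
    level = min(15, max(0, int(round((value - lo) / (hi - lo) * 15))))
    return [int(b) for b in format(level, '04b')]

# Toy run: three candidate chords "returned" by the optimizer with energies
samples = [({60, 64, 67}, -3.2), ({60, 63, 67}, -3.1), ({59, 62, 67}, -2.0)]
print(superposition_chord(samples))   # [59, 60, 62, 63, 64, 67]
print(to_four_bits(0.75, 0.0, 1.0))   # [1, 0, 1, 1]

On the real system, of course, the candidate chords and their energies would come back from the D-Wave rather than being hard-coded.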

Thank you Dr. Kirke for a fascinating technical description and for a description of Juliette Pochin that makes one long to hear her in performance.

For anyone who’s thinking of attending the performance or curious, you can find out more about the Port Eliot festival here, Juliette Pochin here, and Alexis Kirke here.

For anyone wondering about data sonification, I also have a Feb. 7, 2014 post featuring a data sonification project by Dr. Domenico Vicinanza which includes a sound clip of his Voyager 1 & 2 spacecraft duet.

Mass production of nanoparticles?

With all the years of nanotechnology and nanomaterials research, it seems strange that mass production of nanoparticles is still very much in the early stages, as a Feb. 24, 2016 news item on phys.org points out,

Nanoparticles – tiny particles 100,000 times smaller than the width of a strand of hair – can be found in everything from drug delivery formulations to pollution controls on cars to HD TV sets. With special properties derived from their tiny size and subsequently increased surface area, they’re critical to industry and scientific research.

They’re also expensive and tricky to make.

Now, researchers at USC [University of Southern California] have created a new way to manufacture nanoparticles that will transform the process from a painstaking, batch-by-batch drudgery into a large-scale, automated assembly line.

A Feb. 24, 2016 USC news release (also on EurekAlert) by Robert Perkins, which originated the news item, offers additional insight,

Consider, for example, gold nanoparticles. They have been shown to easily penetrate cell membranes without causing any damage — an unusual feat given that most penetrations of cell membranes by foreign objects can damage or kill the cell. Their ability to slip through the cell’s membrane makes gold nanoparticles ideal delivery devices for medications to healthy cells or fatal doses of radiation to cancer cells.

However, a single milligram of gold nanoparticles currently costs about $80 (depending on the size of the nanoparticles). That places the price of gold nanoparticles at $80,000 per gram while a gram of pure, raw gold goes for about $50.
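Those figures are easy to sanity-check; a couple of lines of arithmetic (in Python, using only the numbers quoted above) show the roughly 1,600-fold premium the nanoparticle form carries over raw gold:

# Figures quoted in the USC news release
nanoparticle_price_per_mg = 80            # dollars per milligram
nanoparticle_price_per_g = nanoparticle_price_per_mg * 1000
raw_gold_price_per_g = 50                 # dollars per gram, approximate

print(nanoparticle_price_per_g)                         # 80000
print(nanoparticle_price_per_g / raw_gold_price_per_g)  # 1600.0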

“It’s not the gold that’s making it expensive,” Malmstadt [Noah Malmstadt of the USC Viterbi School of Engineering] said. “We can make them, but it’s not like we can cheaply make a 50-gallon drum full of them.”

A fluid situation

At this time, the process of manufacturing a nanoparticle typically involves a technician in a chemistry lab mixing up a batch of chemicals by hand in traditional lab flasks and beakers.

The new technique used by Brutchey [Richard Brutchey of the USC Dornsife College of Letters, Arts and Sciences] and Malmstadt instead relies on microfluidics — technology that manipulates tiny droplets of fluid in narrow channels.

“In order to go large scale, we have to go small,” Brutchey said.

Really small.

The team 3-D printed tubes about 250 micrometers in diameter, which they believe to be the smallest, fully enclosed 3-D printed tubes anywhere. For reference, your average-sized speck of dust is 50 micrometers wide.

They then built a parallel network of four of these tubes, side by side, and ran a combination of two nonmixing fluids (like oil and water) through them. As the two fluids fought to get out through the openings, they squeezed off tiny droplets. Each of these droplets acted as a micro-scale chemical reactor in which materials were mixed and nanoparticles were generated. Each microfluidic tube can create millions of identical droplets that perform the same reaction.

This sort of system has been envisioned in the past, but it hasn’t been able to be scaled up because the parallel structure meant that if one tube got jammed, it would cause a ripple effect of changing pressures along its neighbors, knocking out the entire system. Think of it like losing a single Christmas light in one of the old-style strands — lose one and you lose them all.

Brutchey and Malmstadt bypassed this problem by altering the geometry of the tubes themselves, shaping the junction between the tubes such that the particles come out a uniform size and the system is immune to pressure changes.

Here’s a link to and a citation for the paper,

Flow invariant droplet formation for stable parallel microreactors by Carson T. Riche, Emily J. Roberts, Malancha Gupta, Richard L. Brutchey & Noah Malmstadt. Nature Communications 7, Article number: 10780 doi:10.1038/ncomms10780 Published 23 February 2016

This is an open access paper.

Handling massive digital datasets the quantum way

A Jan. 25, 2016 news item on phys.org describes a new approach to analyzing and managing huge datasets,

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data—far more information than people can actually process, manage, or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT [Massachusetts Institute of Technology], the University of Waterloo, and the University of Southern California [USC].

A Jan. 25, 2016 MIT news release (*also on EurekAlert*), which originated the news item, describes the theory in more detail,

… Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”
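To make the “connections and holes” language a little more concrete, here is a tiny classical Python sketch that computes the simplest topological feature of a dataset: the number of connected components (the zeroth Betti number), using union-find. It is purely illustrative and is not part of the quantum algorithm; the point is only that the higher-order features (loops, voids) of large datasets are what make the classical computation so expensive.

# Count connected components (Betti number b0) of a point set whose
# "close" pairs are joined by edges, using union-find.

def count_components(n_points, edges):
    parent = list(range(n_points))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i

    for a, b in edges:
        parent[find(a)] = find(b)           # merge the two clusters

    return len({find(i) for i in range(n_points)})

# Six data points; edges connect points that are close to each other
edges = [(0, 1), (1, 2), (3, 4)]
print(count_components(6, edges))           # 3 components: {0,1,2}, {3,4}, {5}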

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.
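The scale gap Lloyd is describing is easy to verify directly; 2^300 is a 91-digit number, while the quantum version of the calculation needs a register of only 300 qubits:

classical_units = 2 ** 300    # brute-force classical resources for 300 data points
quantum_bits = 300            # qubits needed by the quantum algorithm

print(f"{classical_units:.3e}")     # about 2.037e+90
print(len(str(classical_units)))    # 91 digits
print(quantum_bits)                 # 300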

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”

Here’s a link to and a citation for the paper,

Quantum algorithms for topological and geometric analysis of data by Seth Lloyd, Silvano Garnerone, & Paolo Zanardi. Nature Communications 7, Article number: 10138 doi:10.1038/ncomms10138 Published 25 January 2016

This paper is open access.

ETA Jan. 25, 2016 1245 hours PST,

Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

*’also on EurekAlert’ text and link added Jan. 26, 2016.

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community was mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
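A short NumPy sketch illustrates the counting argument in that passage: the joint state of n qubits needs 2^n amplitudes to describe classically, which is where the exponential growth in capacity (and in classical simulation cost) comes from. This is generic textbook material, not D-Wave-specific code.

import numpy as np

# One qubit in an equal superposition of 0 and 1
qubit = np.array([1.0, 1.0]) / np.sqrt(2)

# Two such qubits: the joint state is the Kronecker (tensor) product,
# so it carries 2**2 = 4 amplitudes (for 00, 01, 10 and 11) at once
two_qubits = np.kron(qubit, qubit)
print(two_qubits)                 # [0.5 0.5 0.5 0.5]

# The classical description doubles with every qubit added
for n in (1, 2, 10, 30):
    print(n, "qubits ->", 2 ** n, "amplitudes")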

D-Wave claims to have found a solution to the decoherence problem, and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

It takes a lot of innovation before you make big strides forward, and I think D-Wave is to be congratulated on producing what is, to my knowledge, the only commercially available form of quantum computing of any sort in the world.

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”

According to the article, the university is looking for industrial partners to help them exploit this breakthrough. Francis’s article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

Replace silicon with black phosphorus instead of graphene?

I have two pieces of black phosphorus research. The first comes out of ‘La belle province’ or, as it’s more usually called, Québec (Canada).

Foundational research on phosphorene

There’s a lot of interest in replacing silicon for a number of reasons and, increasingly, there’s interest in finding an alternative to graphene.

A July 7, 2015 news item on Nanotechnology Now describes a new material for use as transistors,

As scientists continue to hunt for a material that will make it possible to pack more transistors on a chip, new research from McGill University and Université de Montréal adds to evidence that black phosphorus could emerge as a strong candidate.

In a study published today in Nature Communications, the researchers report that when electrons move in a phosphorus transistor, they do so only in two dimensions. The finding suggests that black phosphorus could help engineers surmount one of the big challenges for future electronics: designing energy-efficient transistors.

A July 7, 2015 McGill University news release on EurekAlert, which originated the news item, describes the field of 2D materials and the research into black phosphorus and its 2D version, phosphorene (analogous to graphite and graphene),

“Transistors work more efficiently when they are thin, with electrons moving in only two dimensions,” says Thomas Szkopek, an associate professor in McGill’s Department of Electrical and Computer Engineering and senior author of the new study. “Nothing gets thinner than a single layer of atoms.”

In 2004, physicists at the University of Manchester in the U.K. first isolated and explored the remarkable properties of graphene — a one-atom-thick layer of carbon. Since then scientists have rushed to investigate a range of other two-dimensional materials. One of those is black phosphorus, a form of phosphorus that is similar to graphite and can be separated easily into single atomic layers, known as phosphorene.

Phosphorene has sparked growing interest because it overcomes many of the challenges of using graphene in electronics. Unlike graphene, which acts like a metal, black phosphorus is a natural semiconductor: it can be readily switched on and off.

“To lower the operating voltage of transistors, and thereby reduce the heat they generate, we have to get closer and closer to designing the transistor at the atomic level,” Szkopek says. “The toolbox of the future for transistor designers will require a variety of atomic-layered materials: an ideal semiconductor, an ideal metal, and an ideal dielectric. All three components must be optimized for a well designed transistor. Black phosphorus fills the semiconducting-material role.”

The work resulted from a multidisciplinary collaboration among Szkopek’s nanoelectronics research group, the nanoscience lab of McGill Physics Prof. Guillaume Gervais, and the nanostructures research group of Prof. Richard Martel in Université de Montréal’s Department of Chemistry.

To examine how the electrons move in a phosphorus transistor, the researchers observed them under the influence of a magnetic field in experiments performed at the National High Magnetic Field Laboratory in Tallahassee, FL, the largest and highest-powered magnet laboratory in the world. This research “provides important insights into the fundamental physics that dictate the behavior of black phosphorus,” says Tim Murphy, DC Field Facility Director at the Florida facility.

“What’s surprising in these results is that the electrons are able to be pulled into a sheet of charge which is two-dimensional, even though they occupy a volume that is several atomic layers in thickness,” Szkopek says. That finding is significant because it could potentially facilitate manufacturing the material — though at this point “no one knows how to manufacture this material on a large scale.”

“There is a great emerging interest around the world in black phosphorus,” Szkopek says. “We are still a long way from seeing atomic layer transistors in a commercial product, but we have now moved one step closer.”

Here’s a link to and a citation for the paper,

Two-dimensional magnetotransport in a black phosphorus naked quantum well by V. Tayari, N. Hemsworth, I. Fakih, A. Favron, E. Gaufrès, G. Gervais, R. Martel & T. Szkopek. Nature Communications 6, Article number: 7702 doi:10.1038/ncomms8702 Published 07 July 2015

This is an open access paper.

The second piece of research into black phosphorus is courtesy of an international collaboration.

A phosphorene transistor

A July 9, 2015 Technical University of Munich (TUM) press release (also on EurekAlert) describes the formation of a phosphorene transistor made possible by the introduction of arsenic,

Chemists at the Technische Universität München (TUM) have now developed a semiconducting material in which individual phosphorus atoms are replaced by arsenic. In a collaborative international effort, American colleagues have built the first field-effect transistors from the new material.

For many decades silicon has formed the basis of modern electronics. To date silicon technology could provide ever tinier transistors for smaller and smaller devices. But the size of silicon transistors is reaching its physical limit. Also, consumers would like to have flexible devices, devices that can be incorporated into clothing and the likes. However, silicon is hard and brittle. All this has triggered a race for new materials that might one day replace silicon.

Black arsenic phosphorus might be such a material. Like graphene, which consists of a single layer of carbon atoms, it forms extremely thin layers. The array of possible applications ranges from transistors and sensors to mechanically flexible semiconductor devices. Unlike graphene, whose electronic properties are similar to those of metals, black arsenic phosphorus behaves like a semiconductor.

The press release goes on to provide more detail about the collaboration and the research,

A cooperation between the Technical University of Munich and the University of Regensburg on the German side and the University of Southern California (USC) and Yale University in the United States has now, for the first time, produced a field effect transistor made of black arsenic phosphorus. The compounds were synthesized by Marianne Koepf at the laboratory of the research group for Synthesis and Characterization of Innovative Materials at the TUM. The field effect transistors were built and characterized by a group headed by Professor Zhou and Dr. Liu at the Department of Electrical Engineering at USC.

The new technology developed at TUM allows the synthesis of black arsenic phosphorus without high pressure. This requires less energy and is cheaper. The gap between valence and conduction bands can be precisely controlled by adjusting the arsenic concentration. “This allows us to produce materials with previously unattainable electronic and optical properties in an energy window that was hitherto inaccessible,” says Professor Tom Nilges, head of the research group for Synthesis and Characterization of Innovative Materials.

Detectors for infrared

With an arsenic concentration of 83 percent the material exhibits an extremely small band gap of only 0.15 electron volts, making it predestined for sensors which can detect long wavelength infrared radiation. LiDAR (Light Detection and Ranging) sensors operate in this wavelength range, for example. They are used, among other things, as distance sensors in automobiles. Another application is the measurement of dust particles and trace gases in environmental monitoring.
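A quick back-of-the-envelope conversion shows why a 0.15 eV gap points at long-wavelength infrared. The cutoff wavelength of a detector is set by the band gap through λ ≈ 1.24 / E (λ in micrometres, E in electron volts), so 0.15 eV corresponds to roughly 8 µm, squarely in the long-wave infrared. A two-line check:

hc_ev_um = 1.23984          # Planck constant times speed of light, in eV·micrometres
band_gap_ev = 0.15          # value quoted for 83 percent arsenic content

print(round(hc_ev_um / band_gap_ev, 1))   # 8.3 micrometres, long-wave infrared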

A further interesting aspect of these new, two-dimensional semiconductors is their anisotropic electronic and optical behavior. The material exhibits different characteristics along the x- and y-axes in the same plane. To produce graphene like films the material can be peeled off in ultra thin layers. The thinnest films obtained so far are only two atomic layers thick.

Here’s a link to and a citation for the paper,

Black Arsenic–Phosphorus: Layered Anisotropic Infrared Semiconductors with Highly Tunable Compositions and Properties by Bilu Liu, Marianne Köpf, Ahmad N. Abbas, Xiaomu Wang, Qiushi Guo, Yichen Jia, Fengnian Xia, Richard Weihrich, Frederik Bachhuber, Florian Pielnhofer, Han Wang, Rohan Dhall, Stephen B. Cronin, Mingyuan Ge, Xin Fang, Tom Nilges, and Chongwu Zhou. Advanced Materials DOI: 10.1002/adma.201501758 Article first published online: 25 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Dexter Johnson, on his Nanoclast blog (on the Institute of Electrical and Electronics Engineers website), adds more information about black phosphorus and its electrical properties in his July 9, 2015 posting about the Germany/US collaboration (Note: Links have been removed),

Black phosphorus has been around for about 100 years, but recently it has been synthesized as a two-dimensional material—dubbed phosphorene in reference to its two-dimensional cousin, graphene. Black phosphorus is quite attractive for electronic applications like field-effect transistors because of its inherent band gap and it is one of the few 2-D materials to be a natively p-type semiconductor.

One final comment: I notice the Germany-US work was published weeks prior to the Canadian research, suggesting that the TUM July 9, 2015 press release is an attempt to capitalize on the interest generated by the Canadian research. That’s a smart move.

What is a buckybomb?

I gather buckybombs have something to do with cancer treatments. From a March 18, 2015 news item on ScienceDaily,

In 1996, a trio of scientists won the Nobel Prize for Chemistry for their discovery of Buckminsterfullerene — soccer-ball-shaped spheres of 60 joined carbon atoms that exhibit special physical properties.

Now, 20 years later, scientists have figured out how to turn them into Buckybombs.

These nanoscale explosives show potential for use in fighting cancer, with the hope that they could one day target and eliminate cancer at the cellular level — triggering tiny explosions that kill cancer cells with minimal impact on surrounding tissue.

“Future applications would probably use other types of carbon structures — such as carbon nanotubes, but we started with Bucky-balls because they’re very stable, and a lot is known about them,” said Oleg V. Prezhdo, professor of chemistry at the USC [University of Southern California] Dornsife College of Letters, Arts and Sciences and corresponding author of a paper on the new explosives that was published in The Journal of Physical Chemistry on February 24 [2015].

A March 19, 2015 USC news release by Robert Perkins, which despite its publication date originated the news item, describes current cancer treatments with carbon nanotubes and this new technique with fullerenes,

Carbon nanotubes, close relatives of Bucky-balls, are used already to treat cancer. They can be accumulated in cancer cells and heated up by a laser, which penetrates through surrounding tissues without affecting them and directly targets carbon nanotubes. Modifying carbon nanotubes the same way as the Buckybombs will make the cancer treatment more efficient — reducing the amount of treatment needed, Prezhdo said.

To build the miniature explosives, Prezhdo and his colleagues attached 12 nitrous oxide molecules to a single Bucky-ball and then heated it. Within picoseconds, the Bucky-ball disintegrated — increasing temperature by thousands of degrees in a controlled explosion.

The source of the explosion’s power is the breaking of powerful carbon bonds, which snap apart to bond with oxygen from the nitrous oxide, resulting in the creation of carbon dioxide, Prezhdo said.

I’m glad this technique would make treatment more effective but I do pause at the thought of having exploding buckyballs in my body or, for that matter, anyone else’s.

The research was highlighted earlier this month in a March 5, 2015 article by Lisa Zyga for phys.org,

The buckybomb combines the unique properties of two classes of materials: carbon structures and energetic nanomaterials. Carbon materials such as C60 can be chemically modified fairly easily to change their properties. Meanwhile, NO2 groups are known to contribute to detonation and combustion processes because they are a major source of oxygen. So, the scientists wondered what would happen if NO2 groups were attached to C60 molecules: would the whole thing explode? And how?

The simulations answered these questions by revealing the explosion in step-by-step detail. Starting with an intact buckybomb (technically called dodecanitrofullerene, or C60(NO2)12), the researchers raised the simulated temperature to 1000 K (700 °C). Within a picosecond (10^-12 second), the NO2 groups begin to isomerize, rearranging their atoms and forming new groups with some of the carbon atoms from the C60. As a few more picoseconds pass, the C60 structure loses some of its electrons, which interferes with the bonds that hold it together, and, in a flash, the large molecule disintegrates into many tiny pieces of diatomic carbon (C2). What’s left is a mixture of gases including CO2, NO2, and N2, as well as C2.

I encourage you to read Zyga’s article in full as she provides more scientific detail and she notes that this discovery could have applications for the military and for industry.

Here’s a link to and a citation for the researchers’ paper,

Buckybomb: Reactive Molecular Dynamics Simulation by Vitaly V. Chaban, Eudes Eterno Fileti, and Oleg V. Prezhdo. J. Phys. Chem. Lett., 2015, 6 (5), pp 913–917 DOI: 10.1021/acs.jpclett.5b00120 Publication Date (Web): February 24, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

More investment money for Canada’s D-Wave Systems (quantum computing)

A Feb. 2, 2015 news item on Nanotechnology Now features D-Wave Systems (located in the Vancouver region, Canada) and its recent funding bonanza of $29 million (CAD),

Harris & Harris Group, Inc. (Nasdaq:TINY), an investor in transformative companies enabled by disruptive science, notes the announcement by portfolio company, D-Wave Systems, Inc., that it has closed $29 million (CAD) in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million (CAD) raised in 2014. Harris & Harris Group’s total investment in D-Wave is approximately $5.8 million (USD). D-Wave’s announcement also includes highlights of 2014, a year of strong growth and advancement for D-Wave.

A Jan. 29, 2015 D-Wave news release provides more details about the new investment and D-Wave’s 2014 triumphs,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has closed $29 million in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million raised in 2014.

“The investment is a testament to the progress D-Wave continues to make as the leader in quantum computing systems,” said Vern Brownell, CEO of D-Wave. “The funding we received in 2014 will advance our quantum hardware and software development, as well as our work on leading edge applications of our systems. By making quantum computing available to more organizations, we’re driving our goal of finding solutions to the most complex optimization and machine learning applications in national defense, computing, research and finance.”

The funding follows a year of strong growth and advancement for D-Wave. Highlights include:

•    Significant progress made towards the release of the next D-Wave quantum system featuring a 1000 qubit processor, which is currently undergoing testing in D-Wave’s labs.
•    The company’s patent portfolio grew to over 150 issued patents worldwide, with 11 new U.S. patents being granted in 2014, covering aspects of D-Wave’s processor technology, systems and techniques for solving computational problems using D-Wave’s technology.
•    D-Wave Professional Services launched, providing quantum computing experts to collaborate directly with customers, and deliver training classes on the usage and programming of the D-Wave system to a number of national laboratories, businesses and universities.
•    Partnerships were established with DNA-SEQ and 1QBit, companies that are developing quantum software applications in the spheres of medicine and finance, respectively.
•    Research throughout the year continued to validate D-Wave’s work, including a study showing further evidence of quantum entanglement by D-Wave and USC  [University of Southern California] scientists, published in Physical Review X this past May.

Since 2011, some of the most prestigious organizations in the world, including Lockheed Martin, NASA, Google, USC and the Universities Space Research Association (USRA), have partnered with D-Wave to use their quantum computing systems. In 2015, these partners will continue to work with the D-Wave computer, conducting pioneering research in machine learning, optimization, and space exploration.

D-Wave, which already employs over 120 people, plans to expand hiring with the additional funding. Key areas of growth include research, processor and systems development and software engineering.

Harris & Harris Group offers a description of D-Wave which mentions nanotechnology and hosts a couple of explanatory videos,

D-Wave Systems develops an adiabatic quantum computer (QC).

Status
Privately Held

The Market
Electronics – High Performance Computing

The Problem
Traditional or “classical computers” are constrained by the sequential character of data processing that makes the solving of non-polynomial (NP)-hard problems difficult or potentially impossible in reasonable timeframes. These types of computationally intense problems are commonly observed in software verifications, scheduling and logistics planning, integer programming, bioinformatics and financial portfolio optimization.

D-Wave’s Solution
D-Wave develops quantum computers that are capable of processing data using the quantum mechanical properties of matter. Leveraging quantum mechanics enables the identification of solutions to some NP-hard problems in a reasonable timeframe, instead of the exponential time needed for any classical digital computer. D-Wave sold and installed its first quantum computing system to a commercial customer in 2011.

Nanotechnology Factor
To function properly, the D-Wave processor requires tight control and manipulation of quantum mechanical phenomena. This control and manipulation is achieved by creating integrated circuits based on Josephson junctions and other superconducting circuitry. By choosing superconductors, D-Wave managed to combine quantum mechanical behavior with the macroscopic dimensions needed for high-yield design and manufacturing.

It seems D-Wave has made some research and funding strides since I last wrote about the company in a Jan. 19, 2012 posting, although there is no mention of quantum computer sales.
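For readers who would like a more concrete sense of the optimization problems described above, here is a tiny illustrative sketch (mine, not D-Wave's code) of a quadratic unconstrained binary optimization (QUBO) problem, the general form that quantum annealers are designed to tackle. The coefficients are made up and the three-variable instance is solved by brute force on a classical machine, just to show what "finding the lowest-energy assignment" means; it is the exponential blow-up on instances with thousands of variables that makes specialized hardware interesting.

```python
# Illustrative sketch only: the kind of quadratic unconstrained binary
# optimization (QUBO) problem quantum annealers target. All numbers invented.
import itertools

# QUBO: minimize x^T Q x over binary vectors x (hypothetical coefficients)
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.5,   # linear terms (diagonal)
    (0, 1):  2.0, (1, 2):  0.5, (0, 2): -0.3,   # pairwise couplings
}

def energy(x):
    """Evaluate the QUBO objective for one binary assignment."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Exhaustive search is fine for 3 variables but exponential in general,
# which is exactly the difficulty the press release alludes to.
best = min(itertools.product([0, 1], repeat=3), key=energy)
print("best assignment:", best, "energy:", energy(best))
```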

‘Touching’ infrared light, if you’re a rat, followed by announcement of US FDA approval of first commercial artificial retina (bionic eye)

Researcher Miguel Nicolelis and his colleagues at Duke University have implanted a neuroprosthetic device in the portion of rats’ brains related to touch, allowing the animals to detect infrared light. From the Feb. 12, 2013 news release on EurekAlert,

Researchers have given rats the ability to “touch” infrared light, normally invisible to them, by fitting them with an infrared detector wired to microscopic electrodes implanted in the part of the mammalian brain that processes tactile information. The achievement represents the first time a brain-machine interface has augmented a sense in adult animals, said Duke University neurobiologist Miguel Nicolelis, who led the research team.

The experiment also demonstrated for the first time that a novel sensory input could be processed by a cortical region specialized in another sense without “hijacking” the function of this brain area, said Nicolelis. This discovery suggests, for example, that a person whose visual cortex was damaged could regain sight through a neuroprosthesis implanted in another cortical region, he said.

Although the initial experiments tested only whether rats could detect infrared light, there seems no reason that these animals in the future could not be given full-fledged infrared vision, said Nicolelis. For that matter, cortical neuroprostheses could be developed to give animals or humans the ability to see in any region of the electromagnetic spectrum, or even magnetic fields. “We could create devices sensitive to any physical energy,” he said. “It could be magnetic fields, radio waves, or ultrasound. We chose infrared initially because it didn’t interfere with our electrophysiological recordings.”

Interestingly, the research was supported by the US National Institute of Mental Health (as per the news release).

The researchers have more to say about what they’re doing,

“The philosophy of the field of brain-machine interfaces has until now been to attempt to restore a motor function lost to lesion or damage of the central nervous system,” said Thomson [Eric Thomson], first author of the study. “This is the first paper in which a neuroprosthetic device was used to augment function—literally enabling a normal animal to acquire a sixth sense.”

Here’s how they conducted the research,

The mammalian retina is blind to infrared light, and mammals cannot detect any heat generated by the weak infrared light used in the studies. In their experiments, the researchers used a test chamber that contained three light sources that could be switched on randomly. Using visible LED lights, they first taught each rat to choose the active light source by poking its nose into an attached port to receive a reward of a sip of water.

After training the rats, the researchers implanted in their brains an array of stimulating microelectrodes, each roughly a tenth the diameter of a human hair. The microelectrodes were implanted in the cortical region that processes touch information from the animals’ facial whiskers.

Attached to the microelectrodes was an infrared detector affixed to the animals’ foreheads. The system was programmed so that orientation toward an infrared light would trigger an electrical signal to the brain. The signal pulses increased in frequency with the intensity and proximity of the light.

The researchers returned the animals to the test chamber, gradually replacing the visible lights with infrared lights. At first, in infrared trials, when a light was switched on, the animals would tend to poke randomly at the reward ports and scratch at their faces, said Nicolelis. This indicated that they were initially interpreting the brain signals as touch. However, over about a month, the animals learned to associate the brain signal with the infrared source. They began to actively “forage” for the signal, sweeping their heads back and forth to guide themselves to the active light source. Ultimately, they achieved a near-perfect score in tracking and identifying the correct location of the infrared light source.

To ensure that the animals were really using the infrared detector and not their eyes to sense the infrared light, the researchers conducted trials in which the light switched on, but the detector sent no signal to the brain. In these trials, the rats did not react to the infrared light.
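To make the light-to-stimulation mapping a little more concrete, here is a toy sketch of the kind of conversion the news release describes, where stronger or closer infrared light produces higher-frequency stimulation pulses. This is my own illustration with invented parameter values, not the Nicolelis lab's actual code.

```python
# Toy illustration (not the lab's code): stronger / closer infrared light ->
# higher-frequency stimulation pulses delivered through the microelectrodes.
def pulse_frequency_hz(ir_intensity, min_hz=0.0, max_hz=400.0, threshold=0.05):
    """Convert a normalized infrared detector reading (0..1) into a pulse rate.

    Below `threshold` no stimulation is delivered; above it the rate scales
    linearly up to `max_hz`. All parameter values here are invented.
    """
    if ir_intensity < threshold:
        return min_hz
    scale = (ir_intensity - threshold) / (1.0 - threshold)
    return min_hz + scale * (max_hz - min_hz)

for reading in (0.0, 0.1, 0.5, 1.0):
    print(f"detector={reading:.2f} -> {pulse_frequency_hz(reading):6.1f} Hz")
```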

Their finding could have an impact on notions of mammalian brain plasticity,

A key finding, said Nicolelis, was that enlisting the touch cortex for light detection did not reduce its ability to process touch signals. “When we recorded signals from the touch cortex of these animals, we found that although the cells had begun responding to infrared light, they continued to respond to whisker touch. It was almost like the cortex was dividing itself evenly so that the neurons could process both types of information.”

This finding of brain plasticity is in contrast with the “optogenetic” approach to brain stimulation, which holds that a particular neuronal cell type should be stimulated to generate a desired neurological function. Rather, said Nicolelis, the experiments demonstrate that a broad electrical stimulation, which recruits many distinct cell types, can drive a cortical region to adapt to a new source of sensory input.

All of this work is part of Nicolelis’ larger project ‘Walk Again’ which is mentioned in my March 16, 2012 posting and includes a reference to some ethical issues raised by the work. Briefly, Nicolelis and an international team of collaborators are developing a brain-machine interface that will enable full mobility for people who are severely paralyzed. From the news release,

The Walk Again Project has recently received a $20 million grant from FINEP, a Brazilian research funding agency, to allow the development of the first brain-controlled whole-body exoskeleton aimed at restoring mobility in severely paralyzed patients. A first demonstration of this technology is expected to happen in the opening game of the 2014 Soccer World Cup in Brazil.

Expanding sensory abilities could also enable a new type of feedback loop to improve the speed and accuracy of such exoskeletons, said Nicolelis. For example, while researchers are now seeking to use tactile feedback to allow patients to feel the movements produced by such “robotic vests,” the feedback could also be in the form of a radio signal or infrared light that would give the person information on the exoskeleton limb’s position and encounter with objects.

There’s more information including videos about the work with infrared light and rats at the Nicolelis Lab website.  Here’s a citation for and link to the team’s research paper,

Perceiving invisible light through a somatosensory cortical prosthesis by Eric E. Thomson, Rafael Carra, & Miguel A.L. Nicolelis. Nature Communications, published 12 Feb. 2013. DOI: 10.1038/ncomms2497

Meanwhile, the US Food and Drug Administration (FDA) has approved the first commercial artificial retina, from the Feb. 14, 2013 news release,

The U.S. Food and Drug Administration (FDA) granted market approval to an artificial retina technology today, the first bionic eye to be approved for patients in the United States. The prosthetic technology was developed in part with support from the National Science Foundation (NSF).

The device, called the Argus® II Retinal Prosthesis System, transmits images from a small, eye-glass-mounted camera wirelessly to a microelectrode array implanted on a patient’s damaged retina. The array sends electrical signals via the optic nerve, and the brain interprets a visual image.

The FDA approval currently applies to individuals who have lost sight as a result of severe to profound retinitis pigmentosa (RP), an ailment that affects one in every 4,000 Americans. The implant allows some individuals with RP, who are completely blind, to locate objects, detect movement, improve orientation and mobility skills and discern shapes such as large letters.

The Argus II is manufactured by, and will be distributed by, Second Sight Medical Products of Sylmar, Calif., which is part of the team of scientists and engineers from the university, federal and private sectors who spent nearly two decades developing the system with public and private investment.

Scientists are often inspired by family experience to pursue research in a particular area,

“Seeing my grandmother go blind motivated me to pursue ophthalmology and biomedical engineering to develop a treatment for patients for whom there was no foreseeable cure,” says the technology’s co-developer, Mark Humayun, associate director of research at the Doheny Eye Institute at the University of Southern California and director of the NSF Engineering Research Center for Biomimetic MicroElectronic Systems (BMES). …”

There’s also been considerable government investment,

The effort by Humayun and his colleagues has received early and continuing support from NSF, the National Institutes of Health and the Department of Energy, with grants totaling more than $100 million. The private sector’s support nearly matched that of the federal government.

“The retinal implant exemplifies how NSF grants for high-risk, fundamental research can directly result in ground-breaking technologies decades later,” said Acting NSF Assistant Director for Engineering Kesh Narayanan. “In collaboration with the Second Sight team and the courageous patients who volunteered to have experimental surgery to implant the first-generation devices, the researchers of NSF’s Biomimetic MicroElectronic Systems Engineering Research Center are developing technologies that may ultimately have as profound an impact on blindness as the cochlear implant has had for hearing loss.”

Leaving aside controversies about cochlear implants and the possibility of such controversies with artificial retinas (bionic eyes), it’s interesting to note that this device is dependent on an external camera,

The researchers’ efforts have bridged cellular biology–necessary for understanding how to stimulate the retinal ganglion cells without permanent damage–with microelectronics, which led to the miniaturized, low-power integrated chip for performing signal conversion, conditioning and stimulation functions. The hardware was paired with software processing and tuning algorithms that convert visual imagery to stimulation signals, and the entire system had to be incorporated within hermetically sealed packaging that allowed the electronics to operate in the vitreous fluid of the eye indefinitely. Finally, the research team had to develop new surgical techniques in order to integrate the device with the body, ensuring accurate placement of the stimulation electrodes on the retina.

“The artificial retina is a great engineering challenge under the interdisciplinary constraint of biology, enabling technology, regulatory compliance, as well as sophisticated design science,” adds Liu.  [Wentai Liu of the University of California, Los Angeles] “The artificial retina provides an interface between biotic and abiotic systems. Its unique design characteristics rely on system-level optimization, rather than the more common practice of component optimization, to achieve miniaturization and integration. Using the most advanced semiconductor technology, the engine for the artificial retina is a ‘system on a chip’ of mixed voltages and mixed analog-digital design, which provides self-contained power and data management and other functionality. This design for the artificial retina facilitates both surgical procedures and regulatory compliance.”

The Argus II design consists of an external video camera system matched to the implanted retinal stimulator, which contains a microelectrode array that spans 20 degrees of visual field. [emphasis mine] …

“The external camera system, built into a pair of glasses, streams video to a belt-worn computer, which converts the video into stimulus commands for the implant,” says Weiland [USC researcher Jim Weiland]. “The belt-worn computer encodes the commands into a wireless signal that is transmitted to the implant, which has the necessary electronics to receive and decode both wireless power and data. Based on those data, the implant stimulates the retina with small electrical pulses. The electronics are hermetically packaged and the electrical stimulus is delivered to the retina via a microelectrode array.”
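For the curious, here is a minimal sketch of the general pipeline Weiland describes: reducing a camera frame to a coarse grid of per-electrode stimulation levels. It is purely illustrative; the grid size, scaling and function names are my own assumptions, and the real Argus II processing is considerably more sophisticated.

```python
# Minimal sketch of the idea described above: downsample a camera frame to one
# stimulation level per electrode. Grid size and scaling are invented, and this
# is not Second Sight's actual processing.
import numpy as np

def frame_to_stimulus(frame, grid_shape=(6, 10), max_level=255):
    """Average image blocks down to one stimulation level per electrode."""
    h, w = frame.shape
    gh, gw = grid_shape
    # Crop so the frame divides evenly into the electrode grid.
    frame = frame[: h - h % gh, : w - w % gw]
    blocks = frame.reshape(gh, frame.shape[0] // gh, gw, frame.shape[1] // gw)
    levels = blocks.mean(axis=(1, 3))            # mean brightness per block
    return np.clip(levels, 0, max_level).astype(np.uint8)

# Fake 60x100 grayscale "camera frame" just to exercise the function.
fake_frame = np.random.randint(0, 256, size=(60, 100))
print(frame_to_stimulus(fake_frame))
```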

You can see some footage of people using artificial retinas in the context of Grégoire Cosendai’s TEDx Vienna presentation. As I noted in my Aug. 18, 2011 posting where this talk and developments in human enhancement are mentioned, the relevant material can be seen at approximately 13 mins., 25 secs. in Cosendai’s talk.

Second Sight Medical Products can be found here.

Clone your carbon nanotubes

The Nov. 14, 2012 news release on EurekAlert highlights some work on a former nanomaterial superstar, carbon nanotubes,

Scientists and industry experts have long speculated that carbon nanotube transistors would one day replace their silicon predecessors. In 1998, Delft University built the world’s first carbon nanotube transistors; carbon nanotubes have the potential to be far smaller and faster than silicon transistors, and to consume less power.

A key reason carbon nanotubes are not in your computer right now is that they are difficult to manufacture in a predictable way. Scientists have had a difficult time controlling the manufacture of nanotubes to the correct diameter, type and ultimately chirality, factors that control nanotubes’ electrical and mechanical properties.

Carbon nanotubes are typically grown using a chemical vapor deposition (CVD) system in which a chemical-laced gas is pumped into a chamber containing substrates with metal catalyst nanoparticles, upon which the nanotubes grow. It is generally believed that the diameters of the nanotubes are determined by the size of the catalytic metal nanoparticles. However, attempts to control the catalysts in hopes of achieving chirality-controlled nanotube growth have not been successful.

The USC [University of Southern California] team’s innovation was to jettison the catalyst and instead plant pieces of carbon nanotubes that have been separated and pre-selected based on chirality, using a nanotube separation technique developed and perfected by Zheng [Ming Zheng] and his coworkers at NIST [US National Institute of Standards and Technology]. Using those pieces as seeds, the team used chemical vapor deposition to extend the seeds to get much longer nanotubes, which were shown to have the same chirality as the seeds.

The process is referred to as “nanotube cloning.” The next steps in the research will be to carefully study the mechanism of the nanotube growth in this system, to scale up the cloning process to get large quantities of chirality-controlled nanotubes, and to use those nanotubes for electronic applications.
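For anyone wondering why chirality gets so much attention, the chiral indices (n, m) determine both a nanotube’s diameter and whether it conducts like a metal or a semiconductor. Here is a quick back-of-the-envelope calculator using the standard textbook relations (my own illustration, not part of the USC/NIST work).

```python
# Why chirality matters: the chiral indices (n, m) fix a nanotube's diameter
# and whether it is metallic or semiconducting (standard textbook formulas).
import math

A_GRAPHENE = 0.246  # graphene lattice constant in nanometres

def nanotube_properties(n, m):
    diameter_nm = A_GRAPHENE * math.sqrt(n * n + n * m + m * m) / math.pi
    kind = "metallic" if (n - m) % 3 == 0 else "semiconducting"
    return diameter_nm, kind

for n, m in [(10, 10), (13, 0), (6, 5)]:
    d, kind = nanotube_properties(n, m)
    print(f"({n},{m}): diameter = {d:.2f} nm, {kind}")
```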

H/T to ScienceDaily’s Nov. 14, 2012 news item for the full journal reference,

Jia Liu, Chuan Wang, Xiaomin Tu, Bilu Liu, Liang Chen, Ming Zheng, Chongwu Zhou. Chirality-controlled synthesis of single-wall carbon nanotubes using vapour-phase epitaxy. Nature Communications, 13 Nov. 2012. DOI: 10.1038/ncomms2205

The article is behind a paywall.