Tag Archives: USC

Mass production of nanoparticles?

With all the years of nanotechnology and nanomaterials research, it seems strange that mass production of nanoparticles is still very much in its early stages, as a Feb. 24, 2016 news item on phys.org points out,

Nanoparticles – tiny particles 100,000 times smaller than the width of a strand of hair – can be found in everything from drug delivery formulations to pollution controls on cars to HD TV sets. With special properties derived from their tiny size and subsequently increased surface area, they’re critical to industry and scientific research.

They’re also expensive and tricky to make.

Now, researchers at USC [University of Southern California] have created a new way to manufacture nanoparticles that will transform the process from a painstaking, batch-by-batch drudgery into a large-scale, automated assembly line.

A Feb. 24, 2016 USC news release (also on EurekAlert) by Robert Perkins, which originated the news item, offers additional insight,

Consider, for example, gold nanoparticles. They have been shown to easily penetrate cell membranes without causing any damage — an unusual feat given that most penetrations of cell membranes by foreign objects can damage or kill the cell. Their ability to slip through the cell’s membrane makes gold nanoparticles ideal delivery devices for medications to healthy cells or fatal doses of radiation to cancer cells.

However, a single milligram of gold nanoparticles currently costs about $80 (depending on the size of the nanoparticles). That places the price of gold nanoparticles at $80,000 per gram while a gram of pure, raw gold goes for about $50.

“It’s not the gold that’s making it expensive,” Malmstadt [Noah Malmstadt of the USC Viterbi School of Engineering] said. “We can make them, but it’s not like we can cheaply make a 50-gallon drum full of them.”
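The price gap quoted above is easy to sanity-check with a little arithmetic (a back-of-the-envelope sketch using the release's figures, which vary with particle size and are not current market prices):

```python
# Back-of-the-envelope check of the quoted prices (figures from the news
# release, not market data).
price_per_mg = 80        # USD per milligram of gold nanoparticles (quoted)
raw_gold_per_gram = 50   # USD per gram of pure, raw gold (quoted)

nanoparticle_cost_per_gram = price_per_mg * 1000   # 1,000 mg per gram
markup = nanoparticle_cost_per_gram / raw_gold_per_gram

print(nanoparticle_cost_per_gram)  # 80000
print(markup)                      # 1600.0
```

In other words, the nanoparticle form costs some 1,600 times more than the raw metal, which is Malmstadt's point: the expense is in the manufacturing, not the gold.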

A fluid situation

At this time, the process of manufacturing a nanoparticle typically involves a technician in a chemistry lab mixing up a batch of chemicals by hand in traditional lab flasks and beakers.

The new technique used by Brutchey [Richard Brutchey of the USC Dornsife College of Letters, Arts and Sciences] and Malmstadt instead relies on microfluidics — technology that manipulates tiny droplets of fluid in narrow channels.

“In order to go large scale, we have to go small,” Brutchey said.

Really small.

The team 3-D printed tubes about 250 micrometers in diameter, which they believe to be the smallest, fully enclosed 3-D printed tubes anywhere. For reference, your average-sized speck of dust is 50 micrometers wide.

They then built a parallel network of four of these tubes, side by side, and ran a combination of two nonmixing fluids (like oil and water) through them. As the two fluids fought to get out through the openings, they squeezed off tiny droplets. Each of these droplets acted as a micro-scale chemical reactor in which materials were mixed and nanoparticles were generated. Each microfluidic tube can create millions of identical droplets that perform the same reaction.

This sort of system has been envisioned in the past, but it hasn’t been able to be scaled up because the parallel structure meant that if one tube got jammed, it would cause a ripple effect of changing pressures along its neighbors, knocking out the entire system. Think of it like losing a single Christmas light in one of the old-style strands — lose one and you lose them all.

Brutchey and Malmstadt bypassed this problem by altering the geometry of the tubes themselves, shaping the junction between the tubes such that the particles come out a uniform size and the system is immune to pressure changes.

Here’s a link to and a citation for the paper,

Flow invariant droplet formation for stable parallel microreactors by Carson T. Riche, Emily J. Roberts, Malancha Gupta, Richard L. Brutchey & Noah Malmstadt. Nature Communications 7, Article number: 10780 doi:10.1038/ncomms10780 Published 23 February 2016

This is an open access paper.

Handling massive digital datasets the quantum way

A Jan. 25, 2016 news item on phys.org describes a new approach to analyzing and managing huge datasets,

From gene mapping to space exploration, humanity continues to generate ever-larger sets of data—far more information than people can actually process, manage, or understand.

Machine learning systems can help researchers deal with this ever-growing flood of information. Some of the most powerful of these analytical tools are based on a strange branch of geometry called topology, which deals with properties that stay the same even when something is bent and stretched every which way.

Such topological systems are especially useful for analyzing the connections in complex networks, such as the internal wiring of the brain, the U.S. power grid, or the global interconnections of the Internet. But even with the most powerful modern supercomputers, such problems remain daunting and impractical to solve. Now, a new approach that would use quantum computers to streamline these problems has been developed by researchers at MIT [Massachusetts Institute of Technology], the University of Waterloo, and the University of Southern California [USC].

A Jan. 25, 2016 MIT news release (*also on EurekAlert*), which originated the news item, describes the theory in more detail,

… Seth Lloyd, the paper’s lead author and the Nam P. Suh Professor of Mechanical Engineering, explains that algebraic topology is key to the new method. This approach, he says, helps to reduce the impact of the inevitable distortions that arise every time someone collects data about the real world.

In a topological description, basic features of the data (How many holes does it have? How are the different parts connected?) are considered the same no matter how much they are stretched, compressed, or distorted. Lloyd explains that it is often these fundamental topological attributes “that are important in trying to reconstruct the underlying patterns in the real world that the data are supposed to represent.”

It doesn’t matter what kind of dataset is being analyzed, he says. The topological approach to looking for connections and holes “works whether it’s an actual physical hole, or the data represents a logical argument and there’s a hole in the argument. This will find both kinds of holes.”

Using conventional computers, that approach is too demanding for all but the simplest situations. Topological analysis “represents a crucial way of getting at the significant features of the data, but it’s computationally very expensive,” Lloyd says. “This is where quantum mechanics kicks in.” The new quantum-based approach, he says, could exponentially speed up such calculations.

Lloyd offers an example to illustrate that potential speedup: If you have a dataset with 300 points, a conventional approach to analyzing all the topological features in that system would require “a computer the size of the universe,” he says. That is, it would take 2^300 (two to the 300th power) processing units — approximately the number of all the particles in the universe. In other words, the problem is simply not solvable in that way.

“That’s where our algorithm kicks in,” he says. Solving the same problem with the new system, using a quantum computer, would require just 300 quantum bits — and a device this size may be achieved in the next few years, according to Lloyd.
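The scaling contrast Lloyd draws can be made concrete in a few lines (a quick illustration of the resource counts, not the algorithm itself):

```python
# Classical vs. quantum resource counts for Lloyd's 300-point example.
# A rough illustration of the scaling claim, not an implementation of the
# quantum algorithm.
n_points = 300
classical_units = 2 ** n_points   # processing units for brute-force analysis
quantum_bits = n_points           # qubits the quantum algorithm would need

# 2**300 is a 91-digit number, on the order of 10**90 -- comparable to
# estimates of the number of particles in the observable universe.
print(len(str(classical_units)))  # 91
print(quantum_bits)               # 300
```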

“Our algorithm shows that you don’t need a big quantum computer to kick some serious topological butt,” he says.

There are many important kinds of huge datasets where the quantum-topological approach could be useful, Lloyd says, for example understanding interconnections in the brain. “By applying topological analysis to datasets gleaned by electroencephalography or functional MRI, you can reveal the complex connectivity and topology of the sequences of firing neurons that underlie our thought processes,” he says.

The same approach could be used for analyzing many other kinds of information. “You could apply it to the world’s economy, or to social networks, or almost any system that involves long-range transport of goods or information,” says Lloyd, who holds a joint appointment as a professor of physics. But the limits of classical computation have prevented such approaches from being applied before.

While this work is theoretical, “experimentalists have already contacted us about trying prototypes,” he says. “You could find the topology of simple structures on a very simple quantum computer. People are trying proof-of-concept experiments.”

Ignacio Cirac, a professor at the Max Planck Institute of Quantum Optics in Munich, Germany, who was not involved in this research, calls it “a very original idea, and I think that it has a great potential.” He adds “I guess that it has to be further developed and adapted to particular problems. In any case, I think that this is top-quality research.”

Here’s a link to and a citation for the paper,

Quantum algorithms for topological and geometric analysis of data by Seth Lloyd, Silvano Garnerone, & Paolo Zanardi. Nature Communications 7, Article number: 10138 doi:10.1038/ncomms10138 Published 25 January 2016

This paper is open access.

ETA Jan. 25, 2016 1245 hours PST,

Shown here are the connections between different regions of the brain in a control subject (left) and a subject under the influence of the psychedelic compound psilocybin (right). This demonstrates a dramatic increase in connectivity, which explains some of the drug’s effects (such as “hearing” colors or “seeing” smells). Such an analysis, involving billions of brain cells, would be too complex for conventional techniques, but could be handled easily by the new quantum approach, the researchers say. Courtesy of the researchers

*’also on EurekAlert’ text and link added Jan. 26, 2016.

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community was mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
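The doubling described above is worth seeing explicitly: a register of n qubits spans 2^n basis states, and superposition lets it hold amplitudes over all of them at once. A minimal sketch enumerating the classical basis states (illustrative only, not D-Wave code):

```python
from itertools import product

# A register of n qubits spans 2**n basis states. Enumerating the classical
# basis states shows the exponential growth in system size.
def basis_states(n):
    return ["".join(bits) for bits in product("01", repeat=n)]

print(basis_states(2))        # ['00', '01', '10', '11']
print(len(basis_states(10)))  # 1024
```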

D-Wave claims to have found a solution to the decoherence problem, and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

It takes a lot of innovation to make big strides forward, and I think D-Wave is to be congratulated for producing what is, to my knowledge, the only commercially available form of quantum computing of any sort in the world.

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”

According to the article, the university is looking for industrial partners to help it exploit this breakthrough. Francis’s article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

Replace silicon with black phosphorus instead of graphene?

I have two pieces of black phosphorus research. The first comes out of ‘La belle province’ or, as it’s more usually called, Québec (Canada).

Foundational research on phosphorene

There’s a lot of interest in replacing silicon for a number of reasons and, increasingly, there’s interest in finding an alternative to graphene.

A July 7, 2015 news item on Nanotechnology Now describes a new material for use as transistors,

As scientists continue to hunt for a material that will make it possible to pack more transistors on a chip, new research from McGill University and Université de Montréal adds to evidence that black phosphorus could emerge as a strong candidate.

In a study published today in Nature Communications, the researchers report that when electrons move in a phosphorus transistor, they do so only in two dimensions. The finding suggests that black phosphorus could help engineers surmount one of the big challenges for future electronics: designing energy-efficient transistors.

A July 7, 2015 McGill University news release on EurekAlert, which originated the news item, describes the field of 2D materials and the research into black phosphorus and its 2D version, phosphorene (analogous to graphite and graphene),

“Transistors work more efficiently when they are thin, with electrons moving in only two dimensions,” says Thomas Szkopek, an associate professor in McGill’s Department of Electrical and Computer Engineering and senior author of the new study. “Nothing gets thinner than a single layer of atoms.”

In 2004, physicists at the University of Manchester in the U.K. first isolated and explored the remarkable properties of graphene — a one-atom-thick layer of carbon. Since then scientists have rushed to investigate a range of other two-dimensional materials. One of those is black phosphorus, a form of phosphorus that is similar to graphite and can be separated easily into single atomic layers, known as phosphorene.

Phosphorene has sparked growing interest because it overcomes many of the challenges of using graphene in electronics. Unlike graphene, which acts like a metal, black phosphorus is a natural semiconductor: it can be readily switched on and off.

“To lower the operating voltage of transistors, and thereby reduce the heat they generate, we have to get closer and closer to designing the transistor at the atomic level,” Szkopek says. “The toolbox of the future for transistor designers will require a variety of atomic-layered materials: an ideal semiconductor, an ideal metal, and an ideal dielectric. All three components must be optimized for a well designed transistor. Black phosphorus fills the semiconducting-material role.”

The work resulted from a multidisciplinary collaboration among Szkopek’s nanoelectronics research group, the nanoscience lab of McGill Physics Prof. Guillaume Gervais, and the nanostructures research group of Prof. Richard Martel in Université de Montréal’s Department of Chemistry.

To examine how the electrons move in a phosphorus transistor, the researchers observed them under the influence of a magnetic field in experiments performed at the National High Magnetic Field Laboratory in Tallahassee, FL, the largest and highest-powered magnet laboratory in the world. This research “provides important insights into the fundamental physics that dictate the behavior of black phosphorus,” says Tim Murphy, DC Field Facility Director at the Florida facility.

“What’s surprising in these results is that the electrons are able to be pulled into a sheet of charge which is two-dimensional, even though they occupy a volume that is several atomic layers in thickness,” Szkopek says. That finding is significant because it could potentially facilitate manufacturing the material — though at this point “no one knows how to manufacture this material on a large scale.”

“There is a great emerging interest around the world in black phosphorus,” Szkopek says. “We are still a long way from seeing atomic layer transistors in a commercial product, but we have now moved one step closer.”

Here’s a link to and a citation for the paper,

Two-dimensional magnetotransport in a black phosphorus naked quantum well by V. Tayari, N. Hemsworth, I. Fakih, A. Favron, E. Gaufrès, G. Gervais, R. Martel & T. Szkopek. Nature Communications 6, Article number: 7702 doi:10.1038/ncomms8702 Published 07 July 2015

This is an open access paper.

The second piece of research into black phosphorus is courtesy of an international collaboration.

A phosphorene transistor

A July 9, 2015 Technical University of Munich (TUM) press release (also on EurekAlert) describes the formation of a phosphorene transistor made possible by the introduction of arsenic,

Chemists at the Technische Universität München (TUM) have now developed a semiconducting material in which individual phosphorus atoms are replaced by arsenic. In a collaborative international effort, American colleagues have built the first field-effect transistors from the new material.

For many decades silicon has formed the basis of modern electronics. To date silicon technology could provide ever tinier transistors for smaller and smaller devices. But the size of silicon transistors is reaching its physical limit. Also, consumers would like to have flexible devices, devices that can be incorporated into clothing and the like. However, silicon is hard and brittle. All this has triggered a race for new materials that might one day replace silicon.

Black arsenic phosphorus might be such a material. Like graphene, which consists of a single layer of carbon atoms, it forms extremely thin layers. The array of possible applications ranges from transistors and sensors to mechanically flexible semiconductor devices. Unlike graphene, whose electronic properties are similar to those of metals, black arsenic phosphorus behaves like a semiconductor.

The press release goes on to provide more detail about the collaboration and the research,

A cooperation between the Technical University of Munich and the University of Regensburg on the German side and the University of Southern California (USC) and Yale University in the United States has now, for the first time, produced a field effect transistor made of black arsenic phosphorus. The compounds were synthesized by Marianne Koepf at the laboratory of the research group for Synthesis and Characterization of Innovative Materials at the TUM. The field effect transistors were built and characterized by a group headed by Professor Zhou and Dr. Liu at the Department of Electrical Engineering at USC.

The new technology developed at TUM allows the synthesis of black arsenic phosphorus without high pressure. This requires less energy and is cheaper. The gap between valence and conduction bands can be precisely controlled by adjusting the arsenic concentration. “This allows us to produce materials with previously unattainable electronic and optical properties in an energy window that was hitherto inaccessible,” says Professor Tom Nilges, head of the research group for Synthesis and Characterization of Innovative Materials.

Detectors for infrared

With an arsenic concentration of 83 percent the material exhibits an extremely small band gap of only 0.15 electron volts, making it predestined for sensors which can detect long wavelength infrared radiation. LiDAR (Light Detection and Ranging) sensors operate in this wavelength range, for example. They are used, among other things, as distance sensors in automobiles. Another application is the measurement of dust particles and trace gases in environmental monitoring.
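That 0.15 electron volt figure really does put the material in long-wavelength infrared territory; the standard textbook relation λ = hc/E converts a band gap to a detector cutoff wavelength (my own back-of-the-envelope check, not a calculation from the paper):

```python
# Cutoff wavelength for a semiconductor photodetector: lambda = h*c / E_gap.
# Standard textbook relation; constants are CODATA exact values.
H = 6.62607015e-34       # Planck constant, J*s
C = 2.99792458e8         # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electron volt

def cutoff_wavelength_um(band_gap_ev):
    """Longest wavelength (micrometres) that a gap of band_gap_ev can absorb."""
    return H * C / (band_gap_ev * J_PER_EV) * 1e6

print(round(cutoff_wavelength_um(0.15), 1))  # 8.3
```

A cutoff around 8.3 µm sits in the long-wave infrared atmospheric window (roughly 8–14 µm), which is the band used for thermal imaging and the kinds of gas and particle sensing the release mentions.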

A further interesting aspect of these new, two-dimensional semiconductors is their anisotropic electronic and optical behavior. The material exhibits different characteristics along the x- and y-axes in the same plane. To produce graphene-like films, the material can be peeled off in ultrathin layers. The thinnest films obtained so far are only two atomic layers thick.

Here’s a link to and a citation for the paper,

Black Arsenic–Phosphorus: Layered Anisotropic Infrared Semiconductors with Highly Tunable Compositions and Properties by Bilu Liu, Marianne Köpf, Ahmad N. Abbas, Xiaomu Wang, Qiushi Guo, Yichen Jia, Fengnian Xia, Richard Weihrich, Frederik Bachhuber, Florian Pielnhofer, Han Wang, Rohan Dhall, Stephen B. Cronin, Mingyuan Ge, Xin Fang, Tom Nilges, and Chongwu Zhou. Advanced Materials DOI: 10.1002/adma.201501758 Article first published online: 25 JUN 2015

© 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Dexter Johnson, on his Nanoclast blog (on the Institute for Electrical and Electronics Engineers website), adds more information about black phosphorus and its electrical properties in his July 9, 2015 posting about the Germany/US collaboration (Note: Links have been removed),

Black phosphorus has been around for about 100 years, but recently it has been synthesized as a two-dimensional material—dubbed phosphorene in reference to its two-dimensional cousin, graphene. Black phosphorus is quite attractive for electronic applications like field-effect transistors because of its inherent band gap and it is one of the few 2-D materials to be a natively p-type semiconductor.

One final comment: I notice the Germany-US work was published weeks prior to the Canadian research, suggesting that the TUM July 9, 2015 press release is an attempt to capitalize on the interest generated by the Canadian research. That’s a smart move.

What is a buckybomb?

I gather buckybombs have something to do with cancer treatments. From a March 18, 2015 news item on ScienceDaily,

In 1996, a trio of scientists won the Nobel Prize for Chemistry for their discovery of Buckminsterfullerene — soccer-ball-shaped spheres of 60 joined carbon atoms that exhibit special physical properties.

Now, 20 years later, scientists have figured out how to turn them into Buckybombs.

These nanoscale explosives show potential for use in fighting cancer, with the hope that they could one day target and eliminate cancer at the cellular level — triggering tiny explosions that kill cancer cells with minimal impact on surrounding tissue.

“Future applications would probably use other types of carbon structures — such as carbon nanotubes, but we started with Bucky-balls because they’re very stable, and a lot is known about them,” said Oleg V. Prezhdo, professor of chemistry at the USC [University of Southern California] Dornsife College of Letters, Arts and Sciences and corresponding author of a paper on the new explosives that was published in The Journal of Physical Chemistry on February 24 [2015].

A March 19, 2015 USC news release by Robert Perkins, which despite its publication date originated the news item, describes current cancer treatments with carbon nanotubes and this new technique with fullerenes,

Carbon nanotubes, close relatives of Bucky-balls, are used already to treat cancer. They can be accumulated in cancer cells and heated up by a laser, which penetrates through surrounding tissues without affecting them and directly targets carbon nanotubes. Modifying carbon nanotubes the same way as the Buckybombs will make the cancer treatment more efficient — reducing the amount of treatment needed, Prezhdo said.

To build the miniature explosives, Prezhdo and his colleagues attached 12 nitro (NO2) groups to a single Bucky-ball and then heated it. Within picoseconds, the Bucky-ball disintegrated — increasing temperature by thousands of degrees in a controlled explosion.

The source of the explosion’s power is the breaking of powerful carbon bonds, which snap apart to bond with oxygen from the nitro groups, resulting in the creation of carbon dioxide, Prezhdo said.

I’m glad this technique would make treatment more effective but I do pause at the thought of having exploding buckyballs in my body or, for that matter, anyone else’s.

The research was highlighted earlier this month in a March 5, 2015 article by Lisa Zynga for phys.org,

The buckybomb combines the unique properties of two classes of materials: carbon structures and energetic nanomaterials. Carbon materials such as C60 can be chemically modified fairly easily to change their properties. Meanwhile, NO2 groups are known to contribute to detonation and combustion processes because they are a major source of oxygen. So, the scientists wondered what would happen if NO2 groups were attached to C60 molecules: would the whole thing explode? And how?

The simulations answered these questions by revealing the explosion in step-by-step detail. Starting with an intact buckybomb (technically called dodecanitrofullerene, or C60(NO2)12), the researchers raised the simulated temperature to 1000 K (700 °C). Within a picosecond (10^-12 second), the NO2 groups begin to isomerize, rearranging their atoms and forming new groups with some of the carbon atoms from the C60. As a few more picoseconds pass, the C60 structure loses some of its electrons, which interferes with the bonds that hold it together, and, in a flash, the large molecule disintegrates into many tiny pieces of diatomic carbon (C2). What’s left is a mixture of gases including CO2, NO2, and N2, as well as C2.

I encourage you to read Zynga’s article in whole as she provides more scientific detail and she notes that this discovery could have applications for the military and for industry.

Here’s a link to and a citation for the researchers’ paper,

Buckybomb: Reactive Molecular Dynamics Simulation by Vitaly V. Chaban, Eudes Eterno Fileti, and Oleg V. Prezhdo. J. Phys. Chem. Lett., 2015, 6 (5), pp 913–917 DOI: 10.1021/acs.jpclett.5b00120 Publication Date (Web): February 24, 2015

Copyright © 2015 American Chemical Society

This paper is behind a paywall.

More investment money for Canada’s D-Wave Systems (quantum computing)

A Feb. 2, 2015 news item on Nanotechnology Now features D-Wave Systems (located in the Vancouver region, Canada) and its recent funding bonanza of $29 million (CAD),

Harris & Harris Group, Inc. (Nasdaq:TINY), an investor in transformative companies enabled by disruptive science, notes the announcement by portfolio company, D-Wave Systems, Inc., that it has closed $29 million (CAD) in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million (CAD) raised in 2014. Harris & Harris Group’s total investment in D-Wave is approximately $5.8 million (USD). D-Wave’s announcement also includes highlights of 2014, a year of strong growth and advancement for D-Wave.

A Jan. 29, 2015 D-Wave news release provides more details about the new investment and D-Wave’s 2014 triumphs,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has closed $29 million in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million raised in 2014.

“The investment is a testament to the progress D-Wave continues to make as the leader in quantum computing systems,” said Vern Brownell, CEO of D-Wave. “The funding we received in 2014 will advance our quantum hardware and software development, as well as our work on leading edge applications of our systems. By making quantum computing available to more organizations, we’re driving our goal of finding solutions to the most complex optimization and machine learning applications in national defense, computing, research and finance.”

The funding follows a year of strong growth and advancement for D-Wave. Highlights include:

•    Significant progress made towards the release of the next D-Wave quantum system featuring a 1000 qubit processor, which is currently undergoing testing in D-Wave’s labs.
•    The company’s patent portfolio grew to over 150 issued patents worldwide, with 11 new U.S. patents being granted in 2014, covering aspects of D-Wave’s processor technology, systems and techniques for solving computational problems using D-Wave’s technology.
•    D-Wave Professional Services launched, providing quantum computing experts to collaborate directly with customers, and deliver training classes on the usage and programming of the D-Wave system to a number of national laboratories, businesses and universities.
•    Partnerships were established with DNA-SEQ and 1QBit, companies that are developing quantum software applications in the spheres of medicine and finance, respectively.
•    Research throughout the year continued to validate D-Wave’s work, including a study showing further evidence of quantum entanglement by D-Wave and USC  [University of Southern California] scientists, published in Physical Review X this past May.

Since 2011, some of the most prestigious organizations in the world, including Lockheed Martin, NASA, Google, USC and the Universities Space Research Association (USRA), have partnered with D-Wave to use their quantum computing systems. In 2015, these partners will continue to work with the D-Wave computer, conducting pioneering research in machine learning, optimization, and space exploration.

D-Wave, which already employs over 120 people, plans to expand hiring with the additional funding. Key areas of growth include research, processor and systems development and software engineering.

Harris & Harris Group offers a description of D-Wave which mentions nanotechnology and hosts a couple of explanatory videos,

D-Wave Systems develops an adiabatic quantum computer (QC).

Status
Privately Held

The Market
Electronics – High Performance Computing

The Problem
Traditional or “classical computers” are constrained by the sequential character of data processing that makes the solving of non-polynomial (NP)-hard problems difficult or potentially impossible in reasonable timeframes. These types of computationally intense problems are commonly observed in software verifications, scheduling and logistics planning, integer programming, bioinformatics and financial portfolio optimization.

D-Wave’s Solution
D-Wave develops quantum computers that are capable of processing data using the quantum mechanical properties of matter. This leverage of quantum mechanics enables the identification of solutions to some non-polynomial (NP)-hard problems in a reasonable timeframe, instead of the exponential time needed by any classical digital computer. D-Wave sold and installed its first quantum computing system to a commercial customer in 2011.
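To make the kind of problem described above concrete, here is a small illustrative sketch (my own, not D-Wave code) of a classical brute-force solver for a QUBO (quadratic unconstrained binary optimization) problem, the general form D-Wave's annealers target. The matrix values below are made up for illustration; the point is the exhaustive `2**n` enumeration, which is exactly the exponential cost quantum annealing aims to sidestep.

```python
from itertools import product

def qubo_energy(x, Q):
    """Energy of a binary assignment x under QUBO matrix Q."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def brute_force_qubo(Q):
    """Exhaustively search all 2^n binary assignments.

    The runtime doubles with every added variable -- the scaling
    that makes large NP-hard optimization intractable classically.
    """
    n = len(Q)
    best = min(product((0, 1), repeat=n), key=lambda x: qubo_energy(x, Q))
    return best, qubo_energy(best, Q)

# Toy 3-variable QUBO (illustrative values, not a real workload):
# energy = -x0 - x1 - x2 + 2*x0*x1 + 2*x1*x2
Q = [[-1, 2, 0],
     [0, -1, 2],
     [0, 0, -1]]
solution, energy = brute_force_qubo(Q)
print(solution, energy)  # → (1, 0, 1) -2
```

For three variables the search visits only eight assignments; at fifty variables it would visit about 10^15, which is why heuristic and quantum approaches are of interest.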

Nanotechnology Factor
To function properly, the D-Wave processor requires tight control and manipulation of quantum mechanical phenomena. This control and manipulation is achieved by creating integrated circuits based on Josephson junctions and other superconducting circuitry. By choosing superconductors, D-Wave managed to combine quantum mechanical behavior with the macroscopic dimensions needed for high-yield design and manufacturing.

It seems D-Wave has made some research and funding strides since I last wrote about the company in a Jan. 19, 2012 posting, although there is no mention of quantum computer sales.

Clone your carbon nanotubes

The Nov. 14, 2012 news release on EurekAlert highlights some work on a former nanomaterial superstar, carbon nanotubes,

Scientists and industry experts have long speculated that carbon nanotube transistors would one day replace their silicon predecessors. In 1998, Delft University built the world’s first carbon nanotube transistors – carbon nanotubes have the potential to be far smaller, faster, and consume less power than silicon transistors.

A key reason carbon nanotubes are not in your computer right now is that they are difficult to manufacture in a predictable way. Scientists have had a difficult time controlling the manufacture of nanotubes to the correct diameter, type and ultimately chirality, factors that control nanotubes’ electrical and mechanical properties.

Carbon nanotubes are typically grown using a chemical vapor deposition (CVD) system in which a chemical-laced gas is pumped into a chamber containing substrates with metal catalyst nanoparticles, upon which the nanotubes grow. It is generally believed that the diameters of the nanotubes are determined by the size of the catalytic metal nanoparticles. However, attempts to control the catalysts in hopes of achieving chirality-controlled nanotube growth have not been successful.

The USC [University of Southern California] team’s innovation was to jettison the catalyst and instead plant pieces of carbon nanotubes that have been separated and pre-selected based on chirality, using a nanotube separation technique developed and perfected by Zheng [Ming Zheng] and his coworkers at NIST [US National Institute of Standards and Technology]. Using those pieces as seeds, the team used chemical vapor deposition to extend the seeds to get much longer nanotubes, which were shown to have the same chirality as the seeds.

The process is referred to as “nanotube cloning.” The next steps in the research will be to carefully study the mechanism of nanotube growth in this system, to scale up the cloning process to get large quantities of chirality-controlled nanotubes, and to use those nanotubes for electronic applications.

H/T to ScienceDaily’s Nov. 14, 2012 news item for the full journal reference,

Jia Liu, Chuan Wang, Xiaomin Tu, Bilu Liu, Liang Chen, Ming Zheng, Chongwu Zhou. Chirality-controlled synthesis of single-wall carbon nanotubes using vapour-phase epitaxy. Nature Communications, 13 Nov. 2012. DOI: 10.1038/ncomms2205

The article is behind a paywall.

Everything becomes part machine

Machine/flesh. That’s what I’ve taken to calling this process of integrating machinery into our and, as I newly realized, other animals’ flesh. My new realization was courtesy of a television ad for Absolut Greyhound Vodka. First, here’s the very, very long (3 mins. 39 secs.) ad/music video version,

I gather the dogs are mostly or, possibly, all animation. Still, the robotic dogs are very thought-provoking. It’s kind of fascinating to me that I found such an unusual, futuristic, and thought-provoking idea embedded in advertising, so I dug around online and found a March 2012 article about the ad campaign by Rae Ann Fera, written for the Fast Company Co.Create website,

In the real world, music and cocktails go hand in hand. In an Absolut world, music and cocktails come with racing robotic greyhounds remotely controlled by a trio of DJs, spurred on by a cast of characters that make Lady Gaga look casual.

“Greyhound”–which is the title of the drink, the video, and the actual music track–is a three-minute visual feast created by TBWA\Chiat\Day that sees three groups of couture-sporting racing enthusiasts converge on the Bonneville Salt Flats to watch some robotic greyhounds speed across the parched plains, all while sipping light pink Absolut Greyhounds. While the fabulous people in the desert give each other the “my team’s going to win” stink-eye, the three members of Swedish House Mafia are off in a desolate bunker remotely controlling the robodogs to a photo-finish while ensconced in holographic orbs. …

Given that “Greyhound” is part music video, part ad, it will be distributed across a number of channels. “When it comes to our target, music is their number one passion point and they live in the digital space, so the campaign is going primarily to TV and digital,” says Absolut’s Kouchnir [Maxime Kouchnir, Vice President, Vodkas, Pernod Ricard USA].

The advertisers, of course, are trying to sell vodka by digitally creating a greyhound that’s part robot/part flesh and then setting the stage for this race with music, fashion, cocktails, and an open-ended result. But, if one thinks of advertising as a reflection of culture, then these animated robot/flesh greyhounds suggest that something is percolating in the zeitgeist.

I have other examples on this blog but here are a few recent non-advertising items I’ve come across that support my thesis. First, there’s an April 27, 2012 article (MIT Media Lab Hosts The Future) by Neal Ungerleider for Fast Company; from the article,

This week, MIT [Massachusetts Institute of Technology] Media Lab researchers and minds from around the world got together to discuss artificial implantable memories, computers that understand emotion… and Microsoft-funded robotic teddy bears. Will the next Guitar Hero soon be discovered?

….

Then there are the scientists who will be able to plant artificial memories in your head. Ted Berger of the University of Southern California is developing prosthetic brain implants that mimic the mind. Apart from turning recipients into cyborgs, the brain prostheses actually create fake memories, science fiction movie style: In experiments, researchers successfully turned long-term memories on and off in lab rats. Berger hopes in the future, once primate testing is complete, to create brain implants for Alzheimer’s and stroke patients to help restore function.

While erasing and/or creating memories may seem a bit distant from our current experience, a May 3, 2012 BBC news article by Fergus Walsh describes another machine/flesh project, this one at the human clinical trial stage: retinal implants have been placed in two British men,

The two patients, Chris James and Robin Millar, lost their vision due to a condition known as retinitis pigmentosa, where the photoreceptor cells at the back of the eye gradually cease to function.

The wafer-thin, 3mm square microelectronic chip has 1,500 light-sensitive pixels which take over the function of the photoreceptor rods and cones.

The surgery involves placing it behind the retina from where a fine cable runs to a control unit under the skin behind the ear.

I believe this is the project I described in an Aug. 18, 2011 posting (scroll down 2/3 of the way), whose clinical trials have 30 participants worldwide.

It sometimes seems that we’re creating new life not through biological means, synthetic or otherwise, but, rather, with our machines, which we are integrating into our own and other animals’ flesh.

What is a diamond worth?

A couple of diamond-related news items have crossed my path lately, causing me to consider diamonds and their social implications. I’ll start with the news items. According to an April 4, 2012 news item on physorg.com, a quantum computer has been built inside a diamond (from the news item),

Diamonds are forever – or, at least, the effects of this diamond on quantum computing may be. A team that includes scientists from USC has built a quantum computer in a diamond, the first of its kind to include protection against “decoherence” – noise that prevents the computer from functioning properly.

I last mentioned decoherence in my July 21, 2011 posting about a joint (University of British Columbia, University of California at Santa Barbara and the University of Southern California) project on quantum computing.

According to the April 5, 2012 news item by Robert Perkins for the University of Southern California (USC),

The multinational team included USC professor Daniel Lidar and USC postdoctoral researcher Zhihui Wang, as well as researchers from the Delft University of Technology in the Netherlands, Iowa State University and the University of California, Santa Barbara. The findings were published today in Nature.

The team’s diamond quantum computer system featured two quantum bits, or qubits, made of subatomic particles.

As opposed to traditional computer bits, which can encode distinctly either a one or a zero, qubits can encode a one and a zero at the same time. This property, called superposition, along with the ability of quantum states to “tunnel” through energy barriers, some day will allow quantum computers to perform optimization calculations much faster than traditional computers.

Like all diamonds, the diamond used by the researchers has impurities – things other than carbon. The more impurities in a diamond, the less attractive it is as a piece of jewelry because it makes the crystal appear cloudy.

The team, however, utilized the impurities themselves.

A rogue nitrogen nucleus became the first qubit. In a second flaw sat an electron, which became the second qubit. (Though put more accurately, the “spin” of each of these subatomic particles was used as the qubit.)

Electrons are smaller than nuclei and perform computations much more quickly, but they also fall victim more quickly to decoherence. A qubit based on a nucleus, which is large, is much more stable but slower.

“A nucleus has a long decoherence time – in the milliseconds. You can think of it as very sluggish,” said Lidar, who holds appointments at the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences.

Though solid-state computing systems have existed before, this was the first to incorporate decoherence protection – using microwave pulses to continually switch the direction of the electron spin rotation.

“It’s a little like time travel,” Lidar said, because switching the direction of rotation time-reverses the inconsistencies in motion as the qubits move back to their original position.
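The “time travel” intuition can be illustrated with a toy classical simulation. The sketch below is my own illustration, not the researchers' method (they applied continuous microwave pulse sequences to an actual electron spin): it models an ensemble of spins dephasing at random, static frequency offsets, then applies one idealized refocusing pulse at the halfway point that reverses the phase accumulated so far, so the ensemble coherence returns to 1.

```python
import cmath
import random

random.seed(0)  # reproducible random frequency offsets

def coherence(freqs, t, echo=False):
    """Magnitude of the average phase factor for an ensemble of spins,
    each precessing at its own static random frequency offset w.

    Without an echo, the phases w*t spread out and the average decays.
    With echo=True, an idealized pi pulse at t/2 flips the sign of the
    phase accumulated in the first half -- 'time-reversing' the
    dephasing so every spin returns to its starting phase.
    """
    total = 0j
    for w in freqs:
        if echo:
            phase = -w * (t / 2) + w * (t / 2)  # first half reversed: net 0
        else:
            phase = w * t
        total += cmath.exp(1j * phase)
    return abs(total / len(freqs))

freqs = [random.gauss(0.0, 1.0) for _ in range(1000)]
print(coherence(freqs, 5.0))             # dephased: well below 1
print(coherence(freqs, 5.0, echo=True))  # refocused: exactly 1.0
```

This is the classic spin-echo picture; the decoherence protection described in the article is a more elaborate, continuously driven version of the same phase-reversal idea.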

Here’s an image I downloaded from the USC webpage hosting Perkins’s news item,

The diamond in the center measures 1 mm x 1 mm. Photo/Courtesy of Delft University of Technology/UC Santa Barbara

I’m not sure what they were trying to illustrate with the image, but I thought it would provide an interesting contrast to the video which follows, about the world’s first all-diamond ring,

I first came across this ring in Laura Hibberd’s March 22, 2012 piece for Huffington Post. For anyone who feels compelled to find out more about it, here’s the jeweller’s (Shawish) website.

What with the posting about Neal Stephenson and Diamond Age (aka The Diamond Age Or A Young Lady’s Illustrated Primer, a novel that integrates nanotechnology into a story about a future with ubiquitous diamonds), a quantum computer in a diamond, and this ring, I’ve started to wonder about the role diamonds will have in society. Will they be integrated into everyday objects or will they remain objects of desire? My guess is that the diamonds we create by manipulating carbon atoms will be considered everyday items, while the ones formed in the bowels of the earth will retain their status.

Math, science and the movies; research on the African continent; diabetes and mice in Canada; NANO Magazine and Canada; poetry on Bowen Island, April 17, 2010

About 10 years ago, I got interested in how the arts and sciences can inform each other when I was trying to organize an art/science event which never did get off the ground (although I still harbour hopes for it one day).  It all came back to me when I read Dave Bruggeman’s (Pasco Phronesis blog) recent post about a new Creative Science Studio opening at the School of Cinematic Arts at the University of Southern California (USC). From Dave’s post,

It [Creative Science Studio] will start this fall at USC, where its School of Cinematic Arts makes heavy use of its proximity to Hollywood, and builds on its history of other projects that use science, technology and entertainment in other areas of research.

The studio will not only help studios improve the depiction of science in the products of their students, faculty and alumni (much like the Science and Entertainment Exchange), but help scientists create entertaining outreach products. In addition, science and engineering topics will be incorporated into the School’s curriculum and be supported in faculty research.

This announcement reminds me a little bit of an IBM/USC initiative in 2008 (from the news item on Nanowerk),

For decades Hollywood has looked to science for inspiration, now IBM researchers are looking to Hollywood for new ideas too.

The entertainment industry has portrayed possible future worlds through science fiction movies – many created by USC’s famous alumni – and IBM wants to tap into that creativity.

At a kickoff event at the USC School of Cinematic Arts, five of IBM’s top scientists met with students and alumni of the school, along with other invitees from the entertainment industry, to “Imagine the World in 2050.” The event is the first phase of an expected collaboration between IBM and USC to explore how combining creative vision and insight with science and technology trends might fuel novel solutions to the most pressing problems and opportunities of our time.

It’s interesting to note that the inspiration is two-way if the two announcements are taken together. The creative people can have access to the latest science and technology work for their pieces and scientists can explore how an idea or solution to a problem that exists in a story might be made real.

I’ve also noted that the first collaboration mentioned suggests that the Creative Science Studio will be able to “help scientists create entertaining outreach products.” My only caveat is that scientists too often believe that science communication means that they do all the communicating while we members of the public are to receive their knowledge enthusiastically and uncritically.

Moving on to the math I mentioned in the headline, there’s an announcement of a new paper that discusses the use of mathematics in cinematic special effects. (I believe the word cinematic is starting to include games and other media in addition to movies.) From the news item on physorg.com,

The use of mathematics in cinematic special effects is described in the article “Crashing Waves, Awesome Explosions, Turbulent Smoke, and Beyond: Applied Mathematics and Scientific Computing in the Visual Effects Industry”, which will appear in the May 2010 issue of the NOTICES OF THE AMS [American Mathematical Society]. The article was written by three University of California, Los Angeles, mathematicians who have made significant contributions to research in this area: Aleka McAdams, Stanley Osher, and Joseph Teran.

Mathematics provides the language for expressing physical phenomena and their interactions, often in the form of partial differential equations. These equations are usually too complex to be solved exactly, so mathematicians have developed numerical methods and algorithms that can be implemented on computers to obtain approximate solutions. The kinds of approximations needed to, for example, simulate a firestorm, were in the past computationally intractable. With faster computing equipment and more-efficient architectures, such simulations are feasible today—and they drive many of the most spectacular feats in the visual effects industry.
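As a toy instance of the numerical methods the article describes, here is a hedged sketch of my own (not from the paper) of an explicit finite-difference solver for the 1-D heat equation, a much-simplified cousin of the smoke and fluid simulations used in effects work: the continuous equation is replaced by repeated local updates on a grid, yielding an approximate solution a computer can march forward in time.

```python
def diffuse(u, alpha, dx, dt, steps):
    """March the 1-D heat equation u_t = alpha * u_xx forward in time
    with an explicit finite-difference scheme on a grid of spacing dx.

    Endpoint values are held fixed (simple boundary conditions).
    The scheme is only stable when alpha*dt/dx**2 <= 0.5 -- the
    classic constraint that shapes step-size choices in practice.
    """
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    u = list(u)
    for _ in range(steps):
        u = ([u[0]] +
             [u[i] + r * (u[i+1] - 2*u[i] + u[i-1])
              for i in range(1, len(u) - 1)] +
             [u[-1]])
    return u

# A hot spot in the middle of a cold rod smooths out over time,
# with heat gradually leaking out through the fixed-zero endpoints.
u0 = [0.0] * 5 + [1.0] + [0.0] * 5
u = diffuse(u0, alpha=1.0, dx=1.0, dt=0.4, steps=50)
print([round(v, 3) for v in u])
```

Production effects solvers handle the far harder Navier-Stokes equations in 3-D, but the pattern is the same: discretize, update locally, and tune the grid and time step against stability and cost.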

This news item too brought back memories. There was a Canadian animated film, Ryan, which both won an Academy Award and involved significant collaboration between a mathematician and an animator. From the MITACS (Mathematics of Information Technology and Complex Systems) 2005 newsletter, Student Notes:

Karan Singh is an Associate Professor at the University of Toronto, where he co-directs the graphics and HCI lab, DGP. His research interests are in artist-driven interactive graphics, encompassing geometric modeling, character animation and non-photorealistic rendering. As a researcher at Alias (1995-1999), he architected facial and character animation tools for Maya (Technical Oscar 2003). He was involved with conceptual design and reverse engineering software at Paraform (Academy Award for technical achievement 2001) and is currently Chief Scientist for Geometry Systems Inc. He has worked on numerous film and animation projects and most recently was the R+D Director for the Oscar-winning animation Ryan (2005).

Someone at Student Notes (SN) goes on to interview Dr. Singh (here’s an excerpt),

SN: Some materials discussing the film Ryan mention the term “psychorealism”. What does this term mean? What problems does the transition from realism to psychorealism pose for the animator, or the computer graphics designer?

KS: Psychorealism is a term coined by Chris [Landreth, film animator] to refer to the glorious complexity of the human psyche depicted through the visual medium of art and animation. The transition is not a problem; psychorealism is stylistic, just a facet of the look and feel of an animation. The challenge lies in the choice and execution of the metaphorical imagery that the animator makes.

Both the article and Dr. Singh’s page are well worth checking out, if the links between mathematics and visual imagery interest you.

Research on the African continent

Last week I received a copy of the Thomson Reuters Global Research Report Africa. My hat’s off to the authors, Jonathan Adams, Christopher King, and Daniel Hook, for including the fact that Africa is a continent with many countries, many languages, and many cultures. From the report (you may need to register at the site to gain access to it, but the only contact I ever get is a copy of their newsletter alerting me to a new report and other incidental info), p. 3,

More than 50 nations, hundreds of languages, and a welter of ethnic and cultural diversity. A continent possessed of abundant natural resources but also perennially wracked by a now-familiar litany of post-colonial woes: poverty, want, political instability and corruption, disease, and armed conflicts frequently driven by ethnic and tribal divisions but supplied by more mature economies. OECD’s recent African Economic Outlook sets out in stark detail the challenge, and the extent to which current global economic problems may make this worse …

While they did the usual about challenges, the authors go on to add this somewhat contrasting information.

Yet the continent is also home to a rich history of higher education and knowledge creation. The University of Al-Karaouine, at Fez in Morocco, was founded in CE 859 as a madrasa and is identified by many as the oldest degree-awarding institution in the world. It was followed in 970 by Al-Azhar University in Egypt. While it was some centuries before the curriculum expanded from religious instruction into the sciences, this makes a very early marker for learning. Today, the Association of African Universities lists 225 member institutions in 44 countries and, as Thomson Reuters data demonstrate, African research has a network of ties to the international community.

A problem for Africa as a whole, as it has been for China and India, is the hemorrhage of talent. Many of its best students take their higher degrees at universities in Europe, Asia and North America. Too few return.

I can’t speak for the details included in the report, which appears to be a consolidation of information available in various reports from international organizations. Personally, I find these consolidations very helpful, as I would never have the time to track all of this down. As well, they have created a graphic which illustrates research relationships. I did have to read the analysis in order to better understand the graphic, but I found the idea itself quite engaging, and I can see (pun!) that, as one becomes more visually literate with this type of graphic, it could be a very useful tool for grasping complex information quickly.

Diabetes and mice

Last week, I missed this notice about a Canadian nanotechnology effort at the University of Calgary. From the news item on Nanowerk,

Using a sophisticated nanotechnology-based “vaccine,” researchers were able to successfully cure mice with type 1 diabetes and slow the onset of the disease in mice at risk for the disease. The study, co-funded by the Juvenile Diabetes Research Foundation (JDRF), provides new and important insights into understanding how to stop the immune attack that causes type 1 diabetes, and could even have implications for other autoimmune diseases.

The study, conducted at the University of Calgary in Alberta, Canada, was published today [April 8, 2010?] in the online edition of the scientific journal Immunity.

NANO Magazine

In more recent news, NANO Magazine’s new issue (no. 17) features a country focus on Canada. From the news item on Nanowerk,

In a special bumper issue of NANO Magazine we focus on two topics – textiles and nanomedicine. We feature articles about textiles from Nicholas Kotov and Kay Obendorf, articles about nanomedicine from the London Centre for Nanotechnology and Hans Hofstraat of Philips Healthcare, and an interview with Peter Singer. NANO Magazine Issue 17 is essential reading: www.nanomagazine.co.uk.

The featured country in this issue is Canada [emphasis mine], notable for its well funded facilities and research that is aggressively focused on industrial applications. Although having no unifying national nanotechnology initiative, there are many extremely well-funded organisations with world class facilities that are undertaking important nano-related research.

I hope I get a chance to read this issue.

Poetry on Bowen Island

Heather Haley, a local Vancouver, BC area, poet is hosting a special event this coming Saturday at her home on Bowen Island. From the news release,

VISITING POETS Salon & Reading

Josef & Heather’s Place
Bowen Island, BC
7:30  PM
Saturday, April 17, 2010

PENN KEMP, inimitable sound poet from London, Ontario

The illustrious CATHERINE OWEN from Vancouver, BC

To RSVP and get directions please email hshaley@emspace.com

Free Admission
Snacks & beverages-BYOB

Please come on over to our place on the sunny south slope to welcome these fabulous poets, hear their marvelous work, *see* their voices right here on Bowen Island!

London, ON performer and playwright PENN KEMP has published twenty-five books of poetry and drama, had six plays and ten CDs produced as well as Canada’s first poetry CD-ROM and several videopoems.  She performs in festivals around the world, most recently in Britain, Brazil and India. Penn is the Canada Council Writer-in-Residence at UWO for 2009-10.  She hosts an eclectic literary show, Gathering Voices, on Radio Western, CHRWradio.com/talk/gatheringvoices.  Her own project for the year is a DVD devoted to Ecco Poetry, Luminous Entrance: a Sound Opera for Climate Change Action, which has just been released.

CATHERINE OWEN is a Vancouver writer who will be reading from her latest book Frenzy (Anvil Press 09) which she has just toured across the entirety of Canada. Her work has appeared in international magazines, seen translation into three languages and been nominated for honours such as the BC Book Prize and the CBC Award. She plays bass and sings in a couple of metal bands and runs her own tutoring and editing business.

I have seen one of Penn Kemp’s video poems. It was at least five years ago and it still resonates with me. Guess what? I highly recommend going if you can. If you’re curious about Heather and her work, go here.