Tag Archives: Google

D-Wave upgrades Google’s quantum computing capabilities

Vancouver-based (more accurately, Burnaby-based) D-Wave Systems has scored a coup as key customers have upgraded from a 512-qubit system to a system with over 1,000 qubits. (The technical breakthrough and concomitant interest from the business community were mentioned here in a June 26, 2015 posting.) As for the latest business breakthrough, here’s more from a Sept. 28, 2015 D-Wave press release,

D-Wave Systems Inc., the world’s first quantum computing company, announced that it has entered into a new agreement covering the installation of a succession of D-Wave systems located at NASA’s Ames Research Center in Moffett Field, California. This agreement supports collaboration among Google, NASA and USRA (Universities Space Research Association) that is dedicated to studying how quantum computing can advance artificial intelligence and machine learning, and the solution of difficult optimization problems. The new agreement enables Google and its partners to keep their D-Wave system at the state-of-the-art for up to seven years, with new generations of D-Wave systems to be installed at NASA Ames as they become available.

“The new agreement is the largest order in D-Wave’s history, and indicative of the importance of quantum computing in its evolution toward solving problems that are difficult for even the largest supercomputers,” said D-Wave CEO Vern Brownell. “We highly value the commitment that our partners have made to D-Wave and our technology, and are excited about the potential use of our systems for machine learning and complex optimization problems.”

Cade Metz’s Sept. 28, 2015 article for Wired magazine provides some interesting observations about D-Wave computers along with some explanations of quantum computing (Note: Links have been removed),

Though the D-Wave machine is less powerful than many scientists hope quantum computers will one day be, the leap to 1000 qubits represents an exponential improvement in what the machine is capable of. What is it capable of? Google and its partners are still trying to figure that out. But Google has said it’s confident there are situations where the D-Wave can outperform today’s non-quantum machines, and scientists at the University of Southern California [USC] have published research suggesting that the D-Wave exhibits behavior beyond classical physics.

A quantum computer operates according to the principles of quantum mechanics, the physics of very small things, such as electrons and photons. In a classical computer, a transistor stores a single “bit” of information. If the transistor is “on,” it holds a 1, and if it’s “off,” it holds a 0. But in a quantum computer, thanks to what’s called the superposition principle, information is held in a quantum system that can exist in two states at the same time. This “qubit” can store a 0 and 1 simultaneously.

Two qubits, then, can hold four values at any given time (00, 01, 10, and 11). And as you keep increasing the number of qubits, you exponentially increase the power of the system. The problem is that building a qubit is an extremely difficult thing. If you read information from a quantum system, it “decoheres.” Basically, it turns into a classical bit that houses only a single value.
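To make the scaling in those two paragraphs concrete, here is a tiny back-of-the-envelope sketch in Python (my own illustration, not anything from the Wired article): each added qubit doubles the number of basis states a register can span, which is why the jump from 512 to 1,000 qubits matters so much.

```python
# Back-of-the-envelope illustration: a register of n classical bits sits in
# exactly one of 2**n configurations at any moment, while n qubits in
# superposition span amplitudes over all 2**n basis states at once.

from itertools import product

def basis_states(n):
    """All 2**n bit-string configurations of an n-(qu)bit register."""
    return ["".join(bits) for bits in product("01", repeat=n)]

for n in (1, 2, 3):
    print(f"{n} qubit(s) -> {2 ** n} basis states: {basis_states(n)}")

# 1 qubit(s) -> 2 basis states: ['0', '1']
# 2 qubit(s) -> 4 basis states: ['00', '01', '10', '11']
# 3 qubit(s) -> 8 basis states: ['000', '001', '010', '011', '100', '101', '110', '111']
```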

D-Wave claims to have found a solution to the decoherence problem, and that appears to be borne out by the USC researchers. Still, it isn’t a general quantum computer (from Metz’s article),

… researchers at USC say that the system appears to display a phenomenon called “quantum annealing” that suggests it’s truly operating in the quantum realm. Regardless, the D-Wave is not a general quantum computer—that is, it’s not a computer for just any task. But D-Wave says the machine is well-suited to “optimization” problems, where you’re facing many, many different ways forward and must pick the best option, and to machine learning, where computers teach themselves tasks by analyzing large amounts of data.

It takes a lot of innovation to make big strides forward, and I think D-Wave is to be congratulated on producing what is, to my knowledge, the only commercially available form of quantum computing of any sort in the world.
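To picture what “annealing” toward the best of many options looks like, here is a toy classical simulated-annealing sketch in Python. To be clear, this is my own illustration of the classical analogue, not D-Wave’s quantum annealing, and the problem and its “energy” function are invented: the search wanders over candidate bit strings, occasionally accepting worse ones early on so it can escape local minima, and settles toward a low-energy solution as the temperature drops.

```python
# Toy classical simulated annealing: an analogy for annealing toward an
# optimal (lowest-energy) solution among many candidates. The problem and
# its "energy" function are invented purely for illustration.

import math
import random

def energy(bits):
    """Hypothetical objective: penalize each pair of adjacent equal bits."""
    return sum(1 for a, b in zip(bits, bits[1:]) if a == b)

def anneal(n_bits=20, steps=5000, t_start=2.0, t_end=0.01, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_bits)]
    best = list(state)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # geometric cooling
        candidate = list(state)
        candidate[rng.randrange(n_bits)] ^= 1              # flip one random bit
        delta = energy(candidate) - energy(state)
        # Always accept improvements; sometimes accept worse moves while "hot".
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            state = candidate
            if energy(state) < energy(best):
                best = list(state)
    return best, energy(best)

print(anneal())  # e.g. an alternating bit pattern with energy 0
```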

ETA Oct. 6, 2015* at 1230 hours PST: Minutes after publishing about D-Wave I came across this item (h/t Quirks & Quarks twitter) about Australian researchers and their quantum computing breakthrough. From an Oct. 6, 2015 article by Hannah Francis for the Sydney (Australia) Morning Herald,

For decades scientists have been trying to turn quantum computing — which allows for multiple calculations to happen at once, making it immeasurably faster than standard computing — into a practical reality rather than a moonshot theory. Until now, they have largely relied on “exotic” materials to construct quantum computers, making them unsuitable for commercial production.

But researchers at the University of New South Wales have patented a new design, published in the scientific journal Nature on Tuesday, created specifically with computer industry manufacturing standards in mind and using affordable silicon, which is found in regular computer chips like those we use every day in smartphones or tablets.

“Our team at UNSW has just cleared a major hurdle to making quantum computing a reality,” the director of the university’s Australian National Fabrication Facility, Andrew Dzurak, the project’s leader, said.

“As well as demonstrating the first quantum logic gate in silicon, we’ve also designed and patented a way to scale this technology to millions of qubits using standard industrial manufacturing techniques to build the world’s first quantum processor chip.”

According to the article, the university is looking for industrial partners to help it exploit this breakthrough. Francis’ article features an embedded video, as well as more detail.

*It was Oct. 6, 2015 in Australia but Oct. 5, 2015 my side of the international date line.

ETA Oct. 6, 2015 (my side of the international date line): An Oct. 5, 2015 University of New South Wales news release on EurekAlert provides additional details.

Here’s a link to and a citation for the paper,

A two-qubit logic gate in silicon by M. Veldhorst, C. H. Yang, J. C. C. Hwang, W. Huang, J. P. Dehollain, J. T. Muhonen, S. Simmons, A. Laucht, F. E. Hudson, K. M. Itoh, A. Morello & A. S. Dzurak. Nature (2015) doi:10.1038/nature15263 Published online 05 October 2015

This paper is behind a paywall.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave is making quite a splash lately due to a technical breakthrough. h/t’s to Speaking up for Canadian Science for the Business in Vancouver article and to Nanotechnology Now for the Harris & Harris Group press release and Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use two bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allows them to be one and zero at the same time, meaning they can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers. D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1,000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which is substantially larger than the 2^512 possibilities available to the company’s currently available 512 qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
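The size comparison in that press release is easy to sanity-check. Here is a short Python calculation I put together (my own illustration; the figure of roughly 10^80 particles in the observable universe is the commonly cited estimate and is an assumption here):

```python
# Sanity check on the press release's search-space claims.

search_space_1000 = 2 ** 1000
search_space_512 = 2 ** 512
particles_estimate = 10 ** 80   # commonly cited rough estimate (assumption)

print(len(str(search_space_1000)))                        # 302 decimal digits
print(len(str(search_space_512)))                         # 155 decimal digits
print(search_space_1000 // search_space_512 == 2 ** 488)  # True: 488 further doublings
print(search_space_1000 > particles_estimate)             # True, by a wide margin
```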

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype (hyperbole), Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

  • Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
  • Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
  • Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
  • Advanced Fabrication:  The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
  • New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources.  In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes the growing commercial interest while providing a good introduction to quantum computing. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.

It’s not clear to me if the writers at The Economist were aware of D-Wave’s latest breakthrough at the time of writing, but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Activity], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.

I encourage you to read the Economist article.

Two final comments. (1) The latest piece, prior to this one, about D-Wave was in a Feb. 6, 2015 posting about then-new investment in the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme at the University of Waterloo’s Institute for Quantum Computing, in the context of an announcement about the science media initiative Research2Reality.

More investment money for Canada’s D-Wave Systems (quantum computing)

A Feb. 2, 2015 news item on Nanotechnology Now features D-Wave Systems (located in the Vancouver region, Canada) and its recent funding bonanza of $29 million (CAD),

Harris & Harris Group, Inc. (Nasdaq:TINY), an investor in transformative companies enabled by disruptive science, notes the announcement by portfolio company, D-Wave Systems, Inc., that it has closed $29 million (CAD) in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million (CAD) raised in 2014. Harris & Harris Group’s total investment in D-Wave is approximately $5.8 million (USD). D-Wave’s announcement also includes highlights of 2014, a year of strong growth and advancement for D-Wave.

A Jan. 29, 2015 D-Wave news release provides more details about the new investment and D-Wave’s 2014 triumphs,

D-Wave Systems Inc., the world’s first quantum computing company, today announced that it has closed $29 million in funding from a large institutional investor, among others. This funding will be used to accelerate development of D-Wave’s quantum hardware and software and expand the software application ecosystem. This investment brings total funding in D-Wave to $174 million (CAD), with approximately $62 million raised in 2014.

“The investment is a testament to the progress D-Wave continues to make as the leader in quantum computing systems,” said Vern Brownell, CEO of D-Wave. “The funding we received in 2014 will advance our quantum hardware and software development, as well as our work on leading edge applications of our systems. By making quantum computing available to more organizations, we’re driving our goal of finding solutions to the most complex optimization and machine learning applications in national defense, computing, research and finance.”

The funding follows a year of strong growth and advancement for D-Wave. Highlights include:

•    Significant progress made towards the release of the next D-Wave quantum system featuring a 1000 qubit processor, which is currently undergoing testing in D-Wave’s labs.
•    The company’s patent portfolio grew to over 150 issued patents worldwide, with 11 new U.S. patents being granted in 2014, covering aspects of D-Wave’s processor technology, systems and techniques for solving computational problems using D-Wave’s technology.
•    D-Wave Professional Services launched, providing quantum computing experts to collaborate directly with customers, and deliver training classes on the usage and programming of the D-Wave system to a number of national laboratories, businesses and universities.
•    Partnerships were established with DNA-SEQ and 1QBit, companies that are developing quantum software applications in the spheres of medicine and finance, respectively.
•    Research throughout the year continued to validate D-Wave’s work, including a study showing further evidence of quantum entanglement by D-Wave and USC  [University of Southern California] scientists, published in Physical Review X this past May.

Since 2011, some of the most prestigious organizations in the world, including Lockheed Martin, NASA, Google, USC and the Universities Space Research Association (USRA), have partnered with D-Wave to use their quantum computing systems. In 2015, these partners will continue to work with the D-Wave computer, conducting pioneering research in machine learning, optimization, and space exploration.

D-Wave, which already employs over 120 people, plans to expand hiring with the additional funding. Key areas of growth include research, processor and systems development and software engineering.

Harris & Harris Group offers a description of D-Wave which mentions nanotechnology and hosts a couple of explanatory videos,

D-Wave Systems develops an adiabatic quantum computer (QC).

Privately Held

The Market
Electronics – High Performance Computing

The Problem
Traditional or “classical computers” are constrained by the sequential character of data processing that makes the solving of non-polynomial (NP)-hard problems difficult or potentially impossible in reasonable timeframes. These types of computationally intense problems are commonly observed in software verifications, scheduling and logistics planning, integer programming, bioinformatics and financial portfolio optimization.

D-Wave’s Solution
D-Wave develops quantum computers that are capable of processing data using the quantum mechanical properties of matter. This leverage of quantum mechanics enables the identification of solutions to some non-polynomial (NP)-hard problems in a reasonable timeframe, instead of the exponential time needed for any classical digital computer. D-Wave sold and installed its first quantum computing system to a commercial customer in 2011.

Nanotechnology Factor
To function properly, the D-Wave processor requires tight control and manipulation of quantum mechanical phenomena. This control and manipulation is achieved by creating integrated circuits based on Josephson junctions and other superconducting circuitry. By picking superconductors, D-Wave managed to combine quantum mechanical behavior with the macroscopic dimensions needed for high-yield design and manufacturing.

It seems D-Wave has made some research and funding strides since I last wrote about the company in a Jan. 19, 2012 posting, although there is no mention of quantum computer sales.

Robo Brain: a new robot learning project

Having covered the RoboEarth project (a European Union-funded ‘internet for robots’ first mentioned here in a Feb. 14, 2011 posting [scroll down about 1/4 of the way], again in a March 12, 2013 posting about the project’s cloud engine, Rapyuta, and most recently in a Jan. 14, 2014 posting), an Aug. 25, 2014 Cornell University news release by Bill Steele (also on EurekAlert with some editorial changes) about the US Robo Brain project immediately caught my attention,

Robo Brain – a large-scale computational system that learns from publicly available Internet resources – is currently downloading and processing about 1 billion images, 120,000 YouTube videos, and 100 million how-to documents and appliance manuals. The information is being translated and stored in a robot-friendly format that robots will be able to draw on when they need it.

The news release spells out why and how researchers have created Robo Brain,

To serve as helpers in our homes, offices and factories, robots will need to understand how the world works and how the humans around them behave. Robotics researchers have been teaching them these things one at a time: How to find your keys, pour a drink, put away dishes, and when not to interrupt two people having a conversation.

This will all come in one package with Robo Brain, a giant repository of knowledge collected from the Internet and stored in a robot-friendly format that robots will be able to draw on when they need it. [emphasis mine]

“Our laptops and cell phones have access to all the information we want. If a robot encounters a situation it hasn’t seen before it can query Robo Brain in the cloud,” explained Ashutosh Saxena, assistant professor of computer science.

Saxena and colleagues at Cornell, Stanford and Brown universities and the University of California, Berkeley, started in July to download about one billion images, 120,000 YouTube videos and 100 million how-to documents and appliance manuals, along with all the training they have already given the various robots in their own laboratories. Robo Brain will process images to pick out the objects in them, and by connecting images and video with text, it will learn to recognize objects and how they are used, along with human language and behavior.

Saxena described the project at the 2014 Robotics: Science and Systems Conference, July 12-16 [2014] in Berkeley.

If a robot sees a coffee mug, it can learn from Robo Brain not only that it’s a coffee mug, but also that liquids can be poured into or out of it, that it can be grasped by the handle, and that it must be carried upright when it is full, as opposed to when it is being carried from the dishwasher to the cupboard.

The system employs what computer scientists call “structured deep learning,” where information is stored in many levels of abstraction. An easy chair is a member of the class of chairs, and going up another level, chairs are furniture. Sitting is something you can do on a chair, but a human can also sit on a stool, a bench or the lawn.

A robot’s computer brain stores what it has learned in a form mathematicians call a Markov model, which can be represented graphically as a set of points connected by lines (formally called nodes and edges). The nodes could represent objects, actions or parts of an image, and each one is assigned a probability – how much you can vary it and still be correct. In searching for knowledge, a robot’s brain makes its own chain and looks for one in the knowledge base that matches within those probability limits.

“The Robo Brain will look like a gigantic, branching graph with abilities for multidimensional queries,” said Aditya Jami, a visiting researcher at Cornell who designed the large-scale database for the brain. It might look something like a chart of relationships between Facebook friends but more on the scale of the Milky Way.
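If I try to picture that “gigantic, branching graph” in miniature, it might look something like the following Python sketch. This is my own guess at the flavour of the structure, not Robo Brain’s actual design, and every node name and probability below is invented: objects, actions and classes are nodes, relations are weighted by how confident the system is, and a query walks the graph within those probability limits.

```python
# Toy sketch of a probability-weighted knowledge graph in the spirit of the
# Robo Brain description (nodes = objects/actions/classes, edges = relations
# carrying a confidence). All names and numbers are invented for illustration.

knowledge = {
    ("coffee_mug", "is_a"): [("container", 0.99)],
    ("coffee_mug", "graspable_by"): [("handle", 0.95)],
    ("coffee_mug", "carry_orientation_when_full"): [("upright", 0.9)],
    ("chair", "is_a"): [("furniture", 0.99)],
    ("sitting", "possible_on"): [("chair", 0.95), ("stool", 0.9), ("bench", 0.85), ("lawn", 0.6)],
}

def query(subject, relation, min_confidence=0.5):
    """Return related nodes above a confidence threshold, most confident first."""
    hits = knowledge.get((subject, relation), [])
    return sorted((h for h in hits if h[1] >= min_confidence),
                  key=lambda h: h[1], reverse=True)

print(query("coffee_mug", "graspable_by"))   # [('handle', 0.95)]
print(query("sitting", "possible_on", 0.8))  # chair, stool, bench
```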

Like a human learner, Robo Brain will have teachers, thanks to crowdsourcing. The Robo Brain website will display things the brain has learned, and visitors will be able to make additions and corrections.

The “robot-friendly format” for information in the European project (RoboEarth) meant machine language but if I understand what’s written in the news release correctly, this project incorporates a mix of machine language and natural (human) language.

This is one of the times the funding sources (US National Science Foundation, two of the armed forces, businesses and a couple of not-for-profit agencies) seem particularly interesting (from the news release),

The project is supported by the National Science Foundation, the Office of Naval Research, the Army Research Office, Google, Microsoft, Qualcomm, the Alfred P. Sloan Foundation and the National Robotics Initiative, whose goal is to advance robotics to help make the United States more competitive in the world economy.

For the curious, here are links to the Robo Brain and RoboEarth websites.

Printing food, changing prostheses, and talking with Google (Larry Page) at TED 2014’s Session 6: Wired

I’m covering two speakers and an interview from this session. First, Avi Reichental, CEO (Chief Executive Officer) of 3D Systems, from his TED biography (Note: A link has been removed),

At 3D Systems, Avi Reichental is helping to imagine a future where 3D scanning-and-printing is an everyday act, and food, clothing, objects are routinely output at home.

Lately, he’s been demo-ing the Cube, a tabletop 3D printer that can print a basketball-sized object, and the ChefJet, a food-grade machine that prints in sugar and chocolate. His company is also rolling out consumer-grade 3D scanning cameras that clip to a tablet to capture three-dimensional objects for printing out later. He’s an instructor at Singularity University (watch his 4-minute intro to 3D printing).

Reichental started by talking about his grandfather, a cobbler who died in the Holocaust and whom he’d never met. Nonetheless, his grandfather had inspired him to be a maker of things in a society where craftsmanship and crafting atrophied until recently with the rise of ‘maker’ culture and 3D printing.

There were a number of items on the stage, shoes, a cake, a guitar and more, all of which had been 3D printed. Reichental’s shoes had also been produced on a 3D printer. If I understand his dream properly, it is to enable everyone to make what they need more cheaply and better.

Next, Hugh Herr, bionics designer, from his TED biography,

Hugh Herr directs the Biomechatronics research group at the MIT Media Lab, where he is pioneering a new class of biohybrid smart prostheses and exoskeletons to improve the quality of life for thousands of people with physical challenges. A computer-controlled prosthesis called the Rheo Knee, for instance, is outfitted with a microprocessor that continually senses the joint’s position and the loads applied to the limb. A powered ankle-foot prosthesis called the BiOM emulates the action of a biological leg to create a natural gait, allowing amputees to walk with normal levels of speed and metabolism as if their legs were biological.

Herr is the founder and chief technology officer of BiOM Inc., which markets the BiOM as the first in a series of products that will emulate or even augment physiological function through electromechanical replacement. You can call it (as they do) “personal bionics.”

Herr walked on his two bionic limbs onto the TED stage. He not only researches and works in the field of bionics, he lives it. His name was mentioned in a previous presentation by David Sengeh (can be found in my March 17, 2014 posting), a 2014 TED Fellow.

Herr talked about biomimicry, i.e., following nature’s lead in design, but he also suggested that design is driving (affecting) nature. If I understand him rightly, he was referencing some of the work with proteins, ligands, etc. and creating devices that are not what we would consider biological or natural as we have tended to use the term.

His talk contrasted somewhat with Reichental’s, as Herr wants to replace the artisanal approach to developing prosthetics with data-driven strategies. Herr covered the mechanical, the dynamic, and the electrical as applied to bionic limbs. I think the term prosthetic is being applied to the older, artisanal limbs as opposed to these mechanical, electrical, dynamic marvels known as bionic limbs.

The mechanical aspect has to do with figuring out how your specific limbs are formed and used and getting precise measurements (with robotic tools) because everyone is a little bit different. The dynamic aspect, also highly individual, is how your muscles work. For example, standing still, walking, etc. all require dynamic responses from your muscles. Finally, there’s the integration with the nervous system so you can feel your limb.

Herr showed a few videos, including one of a woman who lost part of her leg in last year’s Boston Marathon bombing (April 15, 2013). A ballroom dancer, she was invited by Herr onto the stage, where she performed in front of the TED 2014 audience and got a standing ovation.

In the midst of session 6, there was an interview conducted by Charlie Rose (US television presenter) with Larry Page, a co-founder of Google.

Very briefly, I was mildly relieved (although I’m not convinced) to hear that Page is devoted to the notion that search is important. I’ve been concerned about the Google search results I get; they seem less rich and interesting than they were a few years ago. I attribute the situation to the chase for advertising dollars and a decreasing interest in search as the company expands into initiatives such as Google Glass and artificial intelligence, and pursues other interests distinct from what had been its core focus.

I didn’t find much else of interest. Larry Page wants to help people and he’s interested in artificial intelligence and transportation. His perspective seemed a bit simplistic (technology will solve our problems) but perhaps that was for the benefit of people like me. I suspect one of a speaker’s challenges at TED is finding the right level. Certainly, I’ve experienced difficulties with some of the more technical presentations.

One more observation: there was no mention of a current scandal at Google profiled in the April 2014 issue of Vanity Fair (by Vanessa Grigoriadis),

 O.K., Glass: Make Google Eyes

The story behind Google co-founder Sergey Brin’s liaison with Google Glass marketing manager Amanda Rosenberg—and his split from his wife, genetic-testing entrepreneur Anne Wojcicki— has a decidedly futuristic edge. But, as Vanessa Grigoriadis reports, the drama leaves Silicon Valley debating emotional issues, from office romance to fear of mortality.

Given that Page agreed to appear on the TED stage only in the last 10 days, this appearance seems like an attempt at damage control, especially with the mention of Brin, who had his picture taken with the telepresent Edward Snowden on Tuesday, March 18, 2014 at TED 2014.

Unintended consequences of reading science news online

University of Wisconsin-Madison researchers Dominique Brossard and Dietram Scheufele have written a cautionary piece for the AAAS’s (American Association for the Advancement of Science) magazine, Science, according to a Jan. 3, 2013 news item on ScienceDaily,

A science-inclined audience and wide array of communications tools make the Internet an excellent opportunity for scientists hoping to share their research with the world. But that opportunity is fraught with unintended consequences, according to a pair of University of Wisconsin-Madison life sciences communication professors.

Dominique Brossard and Dietram Scheufele, writing in a Perspectives piece for the journal Science, encourage scientists to join an effort to make sure the public receives full, accurate and unbiased information on science and technology.

“This is an opportunity to promote interest in science — especially basic research, fundamental science — but, on the other hand, we could be missing the boat,” Brossard says. “Even our most well-intended effort could backfire, because we don’t understand the ways these same tools can work against us.”

The Jan. 3, 2013 University of Wisconsin-Madison news release by Chris Barncard (which originated the news item) notes,

Recent research by Brossard and Scheufele has described the way the Internet may be narrowing public discourse, and new work shows that a staple of online news presentation — the comments section — and other ubiquitous means to provide endorsement or feedback can color the opinions of readers of even the most neutral science stories.

Online news sources pare down discussion or limit visibility of some information in several ways, according to Brossard and Scheufele.

Many news sites use the popularity of stories or subjects (measured by the numbers of clicks they receive, or the rate at which users share that content with others, or other metrics) to guide the presentation of material.

The search engine Google offers users suggested search terms as they make requests, offering up “nanotechnology in medicine,” for example, to those who begin typing “nanotechnology” in a search box. Users often avail themselves of the list of suggestions, making certain searches more popular, which in turn makes those search terms even more likely to appear as suggestions.
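That suggestion loop (popular searches get suggested more, which makes them more popular) is a classic rich-get-richer dynamic, and it is easy to simulate. Here is a minimal Python sketch I put together as an illustration; the terms and counts are invented:

```python
# Minimal rich-get-richer simulation of the suggestion feedback loop:
# a term is suggested (and clicked) in proportion to its current popularity,
# and each click feeds back into that popularity. Terms and counts invented.

import random

counts = {"nanotechnology in medicine": 50,
          "nanotechnology in food": 48,
          "nanotechnology risks": 47}

rng = random.Random(1)
for _ in range(10_000):
    total = sum(counts.values())
    r = rng.random() * total
    for term, count in counts.items():
        r -= count
        if r <= 0:
            counts[term] += 1   # the clicked term becomes a bit more visible
            break

print(counts)  # early, largely arbitrary differences in visibility get locked in
```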

Brossard and Scheufele have published an earlier study about the ‘narrowing’ effects of search engines such as Google, using the example of the topic ‘nanotechnology’, as per my May 19, 2010 posting. The researchers appear to be building on this earlier work,

The consequences become more daunting for the researchers as Brossard and Scheufele uncover more surprising effects of Web 2.0.

In their newest study, they show that independent of the content of an article about a new technological development, the tone of comments posted by other readers can make a significant difference in the way new readers feel about the article’s subject. The less civil the accompanying comments, the more risk readers attributed to the research described in the news story.

“The day of reading a story and then turning the page to read another is over,” Scheufele says. “Now each story is surrounded by numbers of Facebook likes and tweets and comments that color the way readers interpret even truly unbiased information. This will produce more and more unintended effects on readers, and unless we understand what those are and even capitalize on them, they will just cause more and more problems.”

If even some of the for-profit media world and advocacy organizations are approaching the digital landscape from a marketing perspective, Brossard and Scheufele argue, scientists need to turn to more empirical communications research and engage in active discussions across disciplines of how to most effectively reach large audiences.

“It’s not because there is not decent science writing out there. We know all kinds of excellent writers and sources,” Brossard says. “But can people be certain that those are the sites they will find when they search for information? That is not clear.”

It’s not about preparing for the future. It’s about catching up to the present. And the present, Scheufele says, includes scientific subjects — think fracking, or synthetic biology — that need debate and input from the public.

Here’s a citation and link for the Science article,

Science, New Media, and the Public by Dominique Brossard and Dietram A. Scheufele. Science 4 January 2013: Vol. 339, no. 6115, pp. 40-41. DOI: 10.1126/science.1232329

This article is behind a paywall.

UN’s International Telecommunication Union holds patent summit in Geneva on Oct. 10, 2012

The International Telecommunication Union (ITU) patent summit being held today (Oct. 10, 2012) in Geneva, Switzerland, was announced in July 2012, as noted in this July 6, 2012 news item on the BBC News website,

A rash of patent lawsuits has prompted the UN to call smartphone makers and other mobile industry bodies together.

It said the parties needed to address the “innovation-stifling use of intellectual property” which had led to several devices being banned from sale.

It said innovations deemed essential to industry standards, such as 3G or Jpeg photos, would be the meeting’s focus.

It noted that if just one patent holder demanded unreasonable compensation the cost of a device could “skyrocket”.

Microsoft and Apple are among firms that have called on others not to enforce sales bans on the basis of such standards-essential patents.

However, lawyers have noted that doing so would deprive other companies of a way of counter-attacking other types of patent lawsuits pursued by the two companies.

Here’s a sample of the activity that has led to convening this summit (excerpted from the BBC news item),

“We are seeing an unwelcome trend in today’s marketplace to use standards-essential patents to block markets,” said the ITU secretary general Dr Hamadoun Toure.

Motorola Mobility – now owned by Google – managed to impose a brief sales ban of iPhone and iPads in Germany last year after Apple refused to pay it a licence fee. The dispute centred on a patent deemed crucial to the GPRS data transmission standard used by GSM cellular networks.

Samsung has also attempted to use its 3G patents to bar Apple from selling products in Europe, Japan and the US.

However, industry watchers note that Apple has used lawsuits to ban Samsung products in both the US and Australia and attempted to restrict sales of other companies’ devices powered by Android.

Mike Masnick commented briefly about the summit in his July 12, 2012 posting on Techdirt,

The UN’s International Telecommunication Union (ITU) — the same unit looking at very questionable plans concerning taxing the internet — has apparently decided that it also needs to step in over the massive patent thicket around smartphones. It’s convening a summit … it looks like they’re only inviting the big companies who make products, and leaving the many trolls out of it. Also, it’s unclear from the description if the ITU really grasps the root causes of the problem: the system itself. …

There’s more information on the ITU summit or patent roundtable webpage,

This Roundtable will assess the effectiveness of RAND (reasonable and non-discriminatory) – based patent policies. The purpose of this initiative is to provide a neutral venue for industry, standards bodies and regulators to exchange innovative ideas that can guide future discussions on whether current patent policies and existing industry practices adequately respond to the needs of the various stakeholders.

I was particularly interested in the speakers from industry (from the Patent Roundtable programme/agenda),

Segment 1 (Part II: Specific perspectives of certain key stakeholders in “360 view” format):

Moderator: Mr. Knut Blind, Rotterdam School of Management [ Biography ]

Perspectives from certain key stakeholders:

  • Standard Development Organizations:
    Mr. Antoine Dore, ITU
    [ Biography ]
    Mr. Dirk Weiler, ETSI
    [ Biography ]
  • Industry players:
    Mr. BJ Watrous, Apple
    [ Biography ]
    Mr. Ray Warren, Motorola Mobility
    [ Biography ]
    Mr. Michael Fröhlich, RIM [emphasis mine]
    [ Biography ]
  • Patent offices:
    Mr. Michel Goudelis, European Patent Office
    [ Biography ]
    Mr. Stuart Graham, United States Patent and Trademark Office
    [ Biography ]
  • Academic Institution:
    Mr. Tim Pohlmann, Technical University of Berlin

I was surprised to note the presence of a Canadian company at the summit.

In general, hopes do not seem high that anything will be resolved, putting me in mind of Middle Eastern peace talks, which have stretched on for decades with no immediate end in sight. We’ll see.

Future of Film & Video event being livestreamed from Dublin’s Science Gallery July 13, 2012

As I’ve noted previously (my April 29, 2011 posting), Dublin is celebrating itself as a ‘City of Science’ this year. As part of the festivities (e.g., the Euroscience Open Forum [ESOF] meetings are now taking place in Dublin), the Future of Film & Video event at the Science Gallery will be livestreamed on Friday, July 13, 2012 from 1800 to 1930 hours (10 am – 11:30 am PST). From the event page,

Join Academy award winners Anil Kokaram and Simon Robinson, and BAFTA award winner Mark Jacobs as they discuss the future of film and video, from today’s cutting-edge 3D tech, to tomorrow’s innovations being imagined in labs across the world. You’ll never look at a screen the same way as these visionaries show that in the film and video industry you should expect the unexpected.

This event is part of the UCD Imagine Science Film Festival, and is part of Dublin City of Science. We are grateful for the support of Google Dublin, the Chrome-Media Group at Google, Mountain View, the Sigmedia Group in the Engineering Dept, Trinity College Dublin and also Science Foundation Ireland.

Simon Robinson

Academy Award winner, Simon Robinson is a Founder and the Chief Scientist of The Foundry, one of the most well recognised names in the creation of visual effects software. His technology has touched most of the blockbusters that reach our screens today e.g. Oscar Winning titles Hugo, Rango and effects laden works such as The Matrix, The Lord of the Rings and Avatar. In 2007 he was awarded a SciTech Academy Award for his influence on motion picture technology and in 2010 he was ranked in the top 100 most creative people in business in Fast Company’s annual ranking. His company has made the Sunday Times tech track top 100 list for two years in a row. The Foundry now numbers over 100 employees and, speaking to the FT recently, Simon is quoted as saying, “We never wanted to grow beyond six staff. We never thought we would sell it. We never thought we would buy it back. We are often wrong.”

Mark Jacobs

Mark Jacobs is a BAFTA award winning Producer/Director with a unique track record in innovation. His extensive experience of more than 25 years in broadcasting, with the BBC and other organisations, ranges from traditional programme making and commissioning, to delivering cutting edge innovation. Mark pioneered some of the first applications of 3D animation for both the BBC and Discovery and in 2000 he joined the BBC’s R&D arm to help pioneer new ways of using multimedia content.  Mark has recently produced a 40 minute, multi-screen interactive film for the Natural History Museum with David Attenborough and led the BBC’s series of natural history documentary trials for stereo 3D production. He has a BAFTA for Interactive TV/ Mobile and introduced some of the first tests in computer graphics and augmented reality into the BBC. He has produced many award winning films for BBC series, ranging from Wildlife On One and Supersense to landmark series on the natural history of Polynesia and Central America and also a programme on the Dingle Dolphin!

Anil Kokaram

Academy award winner, Anil Kokaram is a Professor at Trinity College Dublin with a long history in developing new technologies for digital video processing and particularly in the art of making old movies look like new. He started a company called GreenParrotPictures in 2004 which specialised in translating cinematic effects tools into the semi-professional and consumer space. In 2007 Anil was awarded a SciTech Academy award for his work in developing motion estimation technology for the cinema industry in collaboration with Simon Robinson.  GreenParrotPictures was acquired by Google in 2011 and Anil now heads a team of engineers in the Chrome Media Group in the Googleplex, Mountain View, California developing new video tools for Chrome and YouTube.  He continues to collaborate with his research group www.sigmedia.tv in Trinity College Dublin.


Paccar Theatre

Free – prebooking essential [go to event page to prebook]

I’m hoping this will be focussed on something other than the future of 3D technology.

DARPA/Google and Regina Dugan

One of my more recent (Nov. 22, 2011) postings on DARPA (Defense Advanced Research Projects Agency) highlighted their entrepreneurial focus and the person encouraging that focus, agency director Regina Dugan. Given that she’d held the position for roughly 2.5 years, I was surprised to see that she has left to join Google. From the Mar. 13, 2012 news item on physorg.com,

Google on Monday [March 12, 2012] confirmed that Defense Advanced Research Projects Agency chief Regina Dugan is taking a yet-to-be-revealed role at the Internet powerhouse.

Dugan’s Wikipedia entry has already been updated,

Regina E. Dugan was the 19th Director of Defense Advanced Research Projects Agency (DARPA). She was appointed to that position on July 20, 2009. In March 2012, she left her position to take an executive role at Google. She was the first female director of DARPA.

Much of her working career (1996-2012) seems to have been spent at DARPA. I don’t think I’m going to draw too many conclusions from this move, but I am intrigued, especially in light of an essay by a departing Google employee, James Whittaker. From Whittaker’s March 13, 2012 posting on his JW on Tech blog,

The Google I was passionate about was a technology company that empowered its employees to innovate. The Google I left was an advertising company with a single corporate-mandated focus.

Technically I suppose Google has always been an advertising company, but for the better part of the last three years, it didn’t feel like one. Google was an ad company only in the sense that a good TV show is an ad company: having great content attracts advertisers.

He lays out the situation here,

It turns out that there was one place where the Google innovation machine faltered and that one place mattered a lot: competing with Facebook. Informal efforts produced a couple of antisocial dogs in Wave and Buzz. Orkut never caught on outside Brazil. Like the proverbial hare confident enough in its lead to risk a brief nap, Google awoke from its social dreaming to find its front runner status in ads threatened.

Google could still put ads in front of more people than Facebook, but Facebook knows so much more about those people. Advertisers and publishers cherish this kind of personal information, so much so that they are willing to put the Facebook brand before their own. Exhibit A: www.facebook.com/nike, a company with the power and clout of Nike putting their own brand after Facebook’s? No company has ever done that for Google and Google took it personally.

Larry Page himself assumed command to right this wrong. Social became state-owned, a corporate mandate called Google+. It was an ominous name invoking the feeling that Google alone wasn’t enough. Search had to be social. Android had to be social. YouTube, once joyous in their independence, had to be … well, you get the point. [emphasis mine] Even worse was that innovation had to be social. Ideas that failed to put Google+ at the center of the universe were a distraction.

That point about YouTube really strikes home as I’ve become quite dismayed with the advertising on the videos. The consequence is that I’m starting to search for clips on Vimeo first as it doesn’t have intrusive advertising.

Getting back to Whittaker, he notes this about Google and advertising,

The old Google made a fortune on ads because they had good content. It was like TV used to be: make the best show and you get the most ad revenue from commercials. The new Google seems more focused on the commercials themselves.

It’s interesting to contrast Whittaker’s take on the situation, which suggests that the company has lost its entrepreneurial spirit as it focuses on advertising, with the company’s latest hire, Regina Dugan, who seems to have introduced entrepreneurship into DARPA’s activities.

As for the military connection (DARPA is a US Dept. of Defense agency), I remain mindful that the military and intelligence communities have an interest in gathering data, but I would need something more substantive than a hiring decision to draw any conclusions.

For anyone who’s interested in these types of queries, I would suggest reading a 2007 posting, Facebook, the CIA, and You on the Brainsturbator blog, for a careful unpacking of the connections (extremely tenuous) between Facebook and the CIA (US Central Intelligence Agency). The blog owner and essayist, Jordan Boland, doesn’t dismiss the surveillance concern; he’s simply pointing out that it’s difficult to make an unequivocal claim while displaying a number of intriguing connections between agencies and organizations.

Ranking atoms the Google way

According to the Feb. 13, 2012 news item on Nanowerk, Professor Aurora Clark has developed a laboratory-free technique for analyzing molecules, derived from Google’s PageRank software,

The technology that Google uses to analyze trillions of Web pages is being brought to bear on the way molecules are shaped and organized.

Aurora Clark, an associate professor of chemistry at Washington State University, has adapted Google’s PageRank software to create moleculaRnetworks, which scientists can use to determine molecular shapes and chemical reactions without the expense, logistics and occasional danger of lab experiments.

I was particularly interested in this relationship between webpages and molecules,

Google’s PageRank software, developed by its founders at Stanford University, uses an algorithm—a set of mathematical formulas—to measure and prioritize the relevance of various Web pages to a user’s search. Clark and her colleagues realized that the interactions between molecules are a lot like links between Web pages. Some links between some molecules will be stronger and more likely than others.

“So the same algorithm that is used to understand how Web pages are connected can be used to understand how molecules interact,” says Clark.

The PageRank algorithm is particularly efficient because it can look at a massive amount of the Web at once. Similarly, it can quickly characterize the interactions of millions of molecules and help researchers predict how various chemicals will react with one another.
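For anyone curious what “the same algorithm” looks like in miniature, here is a small power-iteration version of PageRank in Python run over a toy interaction graph. It is my own sketch, not the moleculaRnetworks code, and the “molecules” and edge weights are invented; the point is only that the ranking machinery doesn’t care whether the nodes are web pages or molecules.

```python
# Tiny power-iteration PageRank over a toy, made-up "interaction" graph.
# Nodes could be web pages or, in Clark's analogy, molecules; edge weights
# stand in for the strength/likelihood of an interaction (all values invented).

def pagerank(graph, damping=0.85, iterations=100):
    nodes = list(graph)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, neighbours in graph.items():
            total_weight = sum(neighbours.values())
            if total_weight == 0:          # dangling node: spread its rank evenly
                for m in nodes:
                    new_rank[m] += damping * rank[node] / len(nodes)
                continue
            for neighbour, weight in neighbours.items():
                new_rank[neighbour] += damping * rank[node] * weight / total_weight
        rank = new_rank
    return rank

# Toy graph: which "molecules" interact with which, and how strongly (invented).
interactions = {
    "water_1": {"water_2": 1.0, "water_3": 0.5},
    "water_2": {"water_1": 1.0, "ion": 0.8},
    "water_3": {"water_1": 0.5},
    "ion":     {"water_2": 0.8, "water_3": 0.3},
}

for molecule, score in sorted(pagerank(interactions).items(), key=lambda kv: -kv[1]):
    print(f"{molecule:8s} {score:.3f}")
```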

Clark has a special interest given her specialty,

Clark, who uses Pacific Northwest National Laboratories supercomputers and a computer cluster on WSU’s Pullman campus, specializes in the remediation and separation of radioactive materials. With computational chemistry and her Google-based software, she says, she “can learn about all those really nasty things without ever touching them.”

You can find out more about moleculaRnetworks and download the software from this webpage. There’s more about Aurora Clark and her work here.