Tag Archives: US National Security Agency

Could CRISPR (clustered regularly interspaced short palindromic repeats) be weaponized?

On the occasion of an American team’s recent publication of research in which they edited the germline (embryos), I produced a three-part series about CRISPR (clustered regularly interspaced short palindromic repeats), sometimes referred to as CRISPR/Cas9 (links offered at the end of this post).

Somewhere in my series, there’s a quote about how CRISPR could be used as a ‘weapon of mass destruction’, and it seems this has been a hot topic for the last year or so, as James Revill, a research fellow at the University of Sussex, notes in his August 31, 2017 essay on theconversation.com (h/t phys.org August 31, 2017 news item). Note: Links have been removed,

The gene editing technique CRISPR has been in the limelight after scientists reported they had used it to safely remove disease in human embryos for the first time. This follows a “CRISPR craze” over the last couple of years, with the number of academic publications on the topic growing steadily.

There are good reasons for the widespread attention to CRISPR. The technique allows scientists to “cut and paste” DNA more easily than in the past. It is being applied to a number of different peaceful areas, ranging from cancer therapies to the control of disease carrying insects.

Some of these applications – such as the engineering of mosquitoes to resist the parasite that causes malaria – effectively involve tinkering with ecosystems. CRISPR has therefore generated a number of ethical and safety concerns. Some also worry that applications being explored by defence organisations that involve “responsible innovation in gene editing” may send worrying signals to other states.

Concerns are also mounting that gene editing could be used in the development of biological weapons. In 2016, Bill Gates remarked that “the next epidemic could originate on the computer screen of a terrorist intent on using genetic engineering to create a synthetic version of the smallpox virus”. More recently, in July 2017, John Sotos, of Intel Health & Life Sciences, stated that gene editing research could “open up the potential for bioweapons of unimaginable destructive potential”.

An annual worldwide threat assessment report of the US intelligence community in February 2016 argued that the broad availability and low cost of the basic ingredients of technologies like CRISPR makes it particularly concerning.

A Feb. 11, 2016 news item on sciencemagazine.org offers a précis of some of the reactions while a February 9, 2016 article by Antonio Regalado for the Massachusetts Institute of Technology’s MIT Technology Review delves into the matter more deeply,

Genome editing is a weapon of mass destruction.

That’s according to James Clapper, [former] U.S. director of national intelligence, who on Tuesday, in the annual worldwide threat assessment report of the U.S. intelligence community, added gene editing to a list of threats posed by “weapons of mass destruction and proliferation.”

Gene editing refers to several novel ways to alter the DNA inside living cells. The most popular method, CRISPR, has been revolutionizing scientific research, leading to novel animals and crops, and is likely to power a new generation of gene treatments for serious diseases (see “Everything You Need to Know About CRISPR’s Monster Year”).

It is gene editing’s relative ease of use that worries the U.S. intelligence community, according to the assessment. “Given the broad distribution, low cost, and accelerated pace of development of this dual-use technology, its deliberate or unintentional misuse might lead to far-reaching economic and national security implications,” the report said.

The choice by the U.S. spy chief to call out gene editing as a potential weapon of mass destruction, or WMD, surprised some experts. It was the only biotechnology appearing in a tally of six more conventional threats, like North Korea’s suspected nuclear detonation on January 6 [2016], Syria’s undeclared chemical weapons, and new Russian cruise missiles that might violate an international treaty.

The report is an unclassified version of the “collective insights” of the Central Intelligence Agency, the National Security Agency, and half a dozen other U.S. spy and fact-gathering operations.

Although the report doesn’t mention CRISPR by name, Clapper clearly had the newest and the most versatile of the gene-editing systems in mind. The CRISPR technique’s low cost and relative ease of use—the basic ingredients can be bought online for $60—seems to have spooked intelligence agencies.

….

However, one has to be careful with the hype surrounding new technologies and, at present, the security implications of CRISPR are probably modest. There are easier, cruder methods of creating terror. CRISPR would only get aspiring biological terrorists so far. Other steps, such as growing and disseminating biological weapons agents, would typically be required for it to become an effective weapon. This would require additional skills and places CRISPR-based biological weapons beyond the reach of most terrorist groups. At least for the time being.

A July 5, 2016 opinion piece by Malcolm Dando for Nature argues for greater safeguards,

In Geneva next month [August 2016], officials will discuss updates to the global treaty that outlaws the use of biological weapons. The 1972 Biological Weapons Convention (BWC) was the first agreement to ban an entire class of weapons, and it remains a crucial instrument to stop scientific research on viruses, bacteria and toxins from being diverted into military programmes.

The BWC is the best route to ensure that nations take the biological-weapons threat seriously. Most countries have struggled to develop and introduce strong and effective national programmes — witness the difficulty the United States had in agreeing what oversight system should be applied to gain-of-function experiments that created more-dangerous lab-grown versions of common pathogens.

As scientific work advances — the CRISPR gene-editing system has been flagged as the latest example of possible dual-use technology — this treaty needs to be regularly updated. This is especially important because it has no formal verification system. Proposals for declarations, monitoring visits and inspections were vetoed by the United States in 2001, on the grounds that such verification threatened national security and confidential business information.

Even so, issues such as the possible dual-use threat from gene-editing systems will not be easily resolved. But we have to try. Without the involvement of the BWC, codes of conduct and oversight systems set up at national level are unlikely to be effective. The stakes are high, and after years of fumbling, we need strong international action to monitor and assess the threats from the new age of biological techniques.

Revill notes the latest BWC agreement and suggests future directions,

This convention is imperfect and lacks a way to ensure that states are compliant. Moreover, it has not been adequately “tended to” by its member states recently, with the last major meeting unable to agree a further programme of work. Yet it remains the cornerstone of an international regime against the hostile use of biology. All 178 state parties declared in December of 2016 their continued determination “to exclude completely the possibility of the use of (biological) weapons, and their conviction that such use would be repugnant to the conscience of humankind”.

These states therefore need to address the hostile potential of CRISPR. Moreover, they need to do so collectively. Unilateral national measures, such as reasonable biological security procedures, are important. However, preventing the hostile exploitation of CRISPR is not something that can be achieved by any single state acting alone.

As such, when states party to the convention meet later this year, it will be important to agree to a more systematic and regular review of science and technology. Such reviews can help with identifying and managing the security risks of technologies such as CRISPR, as well as allowing an international exchange of information on some of the potential benefits of such technologies.

Most states supported the principle of enhanced reviews of science and technology under the convention at the last major meeting. But they now need to seize the opportunity and agree on the practicalities of such reviews in order to prevent the convention being left behind by developments in science and technology.

Experts (military, intelligence, medical, etc.) are not the only ones concerned about CRISPR according to a February 11, 2016 article by Sharon Begley for statnews.com (Note: A link has been removed),

Most Americans oppose using powerful new technology to alter the genes of unborn babies, according to a new poll — even to prevent serious inherited diseases.

They expressed the strongest disapproval for editing genes to create “designer babies” with enhanced intelligence or looks.

But the poll, conducted by STAT and Harvard T.H. Chan School of Public Health, found that people have mixed, and apparently not firm, views on emerging genetic techniques. US adults are almost evenly split on whether the federal government should fund research on editing genes before birth to keep children from developing diseases such as cystic fibrosis or Huntington’s disease.

“They’re not against scientists trying to improve [genome-editing] technologies,” said Robert Blendon, professor of health policy and political analysis at Harvard’s Chan School, perhaps because they recognize that one day there might be a compelling reason to use such technologies. An unexpected event, such as scientists “eliminating a terrible disease” that a child would have otherwise inherited, “could change people’s views in the years ahead,” Blendon said.

But for now, he added, “people are concerned about editing the genes of those who are yet unborn.”

A majority, however, wants government regulators to approve gene therapy to treat diseases in children and adults.

The STAT-Harvard poll comes as scientists and policy makers confront the ethical, social, and legal implications of these revolutionary tools for changing DNA. Thanks to a technique called CRISPR-Cas9, scientists can easily, and with increasing precision, modify genes through the genetic analog of a computer’s “find and replace” function.
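The “find and replace” comparison can be made concrete with a toy script. This is purely illustrative (a hypothetical function operating on a made-up string of bases; real CRISPR-Cas9 editing involves guide RNAs, PAM sites, and cellular DNA-repair pathways that nothing below models):

```python
# Toy illustration of the "find and replace" analogy for gene editing.
# NOT bioinformatics: real CRISPR-Cas9 uses a ~20-base guide RNA, needs an
# adjacent PAM motif, and relies on the cell's own repair machinery.

def edit_sequence(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target` in `genome`."""
    position = genome.find(target)
    if position == -1:
        raise ValueError("target sequence not found")
    return genome[:position] + replacement + genome[position + len(target):]

# A made-up stretch of DNA with a made-up single-base "disease" variant.
genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"
edited = edit_sequence(genome, target="GGCCGCTGA", replacement="GGCCGTTGA")
print(edited)
```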

I find it surprising that there’s resistance to removing diseases found in the germline (embryos). When public consultations on nanotechnology were being held, the one area where people tended to be quite open to research was health and medicine. Where food was concerned, however, people had far more reservations.

If you’re interested in the STAT-Harvard poll, you can find it here. As for James Revill, he has written a more substantive version of this essay as a paper, which is available here.

On a semi-related note, I found STAT (statnews.com) to be a quite interesting and accessibly written online health science journal. Here’s more from the About Us page (Note: A link has been removed),

What’s STAT all about?
STAT is a national publication focused on finding and telling compelling stories about health, medicine, and scientific discovery. We produce daily news, investigative articles, and narrative projects in addition to multimedia features. We tell our stories from the places that matter to our readers — research labs, hospitals, executive suites, and political campaigns.

Why did you call it STAT?
In medical parlance, “stat” means important and urgent, and that’s what we’re all about — quickly and smartly delivering good stories. Read more about the origins of our name here.

Who’s behind the new publication?
STAT is produced by Boston Globe Media. Our headquarters is located in Boston but we have bureaus in Washington, New York, Cleveland, Atlanta, San Francisco, and Los Angeles. It was started by John Henry, the owner of Boston Globe Media and the principal owner of the Boston Red Sox. Rick Berke is executive editor.

So is STAT part of The Boston Globe?
They’re distinct properties but the two share content and complement one another.

Is it free?
Much of STAT is free. We also offer STAT Plus, a premium subscription plan that includes exclusive reporting about the pharmaceutical and biotech industries as well as other benefits. Learn more about it here.

Who’s working for STAT?
Some of the best-sourced science, health, and biotech journalists in the country, as well as motion graphics artists and data visualization specialists. Our team includes talented writers, editors, and producers capable of the kind of explanatory journalism that complicated science issues sometimes demand.

Who’s your audience?
You. Even if you don’t work in science, have never stepped foot in a hospital, or hated high school biology, we’ve got something for you. And for the lab scientists, health professionals, business leaders, and policy makers, we think you’ll find coverage here that interests you, too. The world of health, science, and medicine is booming and yielding fascinating stories. We explore how they affect us all.

….

As promised, here are the links to my three-part series on CRISPR,

Part 1 opens the series with a basic description of CRISPR and the germline research that occasioned the series along with some of the other (non-weapon) ethical issues and patent disputes that are arising from this new technology. CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of it and the need for it, according to a couple of social scientists. Informally, there is some discussion via pop culture, as Joelle Renstrom notes, although she is focused on the larger issues touched on by the television series Orphan Black, and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

Finally, I hope to stumble across studies from other countries about how they are responding to the possibilities presented by CRISPR/Cas9 so that I can offer a more global perspective than this largely US one. At the very least, it would be interesting to find out if there are differences.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between the US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as standard. Television is a classic example but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now the prospect of dark factories, which can run without lights because there are no people in them, is starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
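To get a feel for what those coefficients imply, here is a rough worked example. The workforce size and robot count are hypothetical, and I use the midpoints of the ranges quoted above; the paper itself estimates these effects econometrically across US commuting zones, so treat this strictly as back-of-envelope arithmetic:

```python
# Back-of-envelope reading of the coefficients quoted above.
# Hypothetical numbers; midpoints of the reported ranges are assumed.

workers = 100_000        # hypothetical local workforce
robots_added = 200       # hypothetical number of new industrial robots

robots_per_thousand = robots_added / (workers / 1_000)   # = 2.0

emp_rate_drop_pp = 0.26 * robots_per_thousand    # midpoint of 0.18-0.34 pp
wage_drop_pct = 0.375 * robots_per_thousand      # midpoint of 0.25-0.5 %
jobs_lost = 6.2 * robots_added                   # 6.2 workers per robot

print(f"{robots_per_thousand:.1f} robots per 1,000 workers")
print(f"~{emp_rate_drop_pp:.2f} percentage-point drop in the employment rate")
print(f"~{wage_drop_pct:.2f}% drop in wages")
print(f"~{jobs_lost:.0f} jobs displaced")
```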

I can’t help wondering, if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, whether that’s representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc. What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research that contradicts a top-tier Trump government official or makes one look foolish? Will it be suppressed?

You can find the report, “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper 23285, released March 2017), here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way) where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

D-Wave passes 1000-qubit barrier

A local (Vancouver, Canada-based) quantum computing company, D-Wave, is making quite a splash lately due to a technical breakthrough. h/t’s to Speaking up for Canadian Science for the Business in Vancouver article and to Nanotechnology Now for the Harris & Harris Group press release and Economist article.

A June 22, 2015 article by Tyler Orton for Business in Vancouver describes D-Wave’s latest technical breakthrough,

“This updated processor will allow significantly more complex computational problems to be solved than ever before,” Jeremy Hilton, D-Wave’s vice-president of processor development, wrote in a June 22 [2015] blog entry.

Regular computers use two bits – ones and zeroes – to make calculations, while quantum computers rely on qubits.

Qubits possess a “superposition” that allows them to be one and zero at the same time, meaning a quantum computer can calculate all possible values in a single operation.

But the algorithm for a full-scale quantum computer requires 8,000 qubits.
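For readers who want a slightly more precise picture than “one and zero at the same time”: a qubit’s state is a pair of complex amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1. Here is a minimal sketch of that standard textbook formalism (the general idea of a qubit, not a model of D-Wave’s actual hardware):

```python
import math

# A single qubit is a pair of complex amplitudes (a, b) for the states
# |0> and |1>, with |a|^2 + |b|^2 = 1. Measurement yields 0 or 1 with
# probabilities |a|^2 and |b|^2.

a = complex(1 / math.sqrt(2), 0)   # equal superposition of |0> and |1>
b = complex(1 / math.sqrt(2), 0)

print(f"P(0) = {abs(a)**2:.2f}, P(1) = {abs(b)**2:.2f}")   # 0.50 each

# Describing n qubits takes 2**n amplitudes -- this exponential growth is
# the "search space" the press release below refers to.
n = 20
print(f"{n} qubits span a state space of dimension {2**n:,}")
```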

A June 23, 2015 Harris & Harris Group press release adds more information about the breakthrough,

Harris & Harris Group, Inc. (Nasdaq: TINY), an investor in transformative companies enabled by disruptive science, notes that its portfolio company, D-Wave Systems, Inc., announced that it has successfully fabricated 1,000 qubit processors that power its quantum computers. D-Wave’s quantum computer runs a quantum annealing algorithm to find the lowest points, corresponding to optimal or near optimal solutions, in a virtual “energy landscape.” Every additional qubit doubles the search space of the processor. At 1,000 qubits, the new processor considers 2^1000 possibilities simultaneously, a search space which is substantially larger than the 2^512 possibilities available to the company’s currently available 512-qubit D-Wave Two. In fact, the new search space contains far more possibilities than there are particles in the observable universe.
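That last claim is easy to check with Python’s arbitrary-precision integers (the ~10^80 particle count is the commonly cited rough estimate for the observable universe):

```python
# Sanity-check the press release's comparison.
new_space = 2 ** 1000    # 1,000-qubit processor
old_space = 2 ** 512     # 512-qubit D-Wave Two
particles = 10 ** 80     # rough estimate, particles in the observable universe

print(len(str(new_space)))                  # 302 digits, i.e. roughly 1.07e301
print(new_space // old_space == 2 ** 488)   # True: the space grew 2**488-fold
print(new_space > particles)                # True, by over 220 orders of magnitude
```

Whether an annealer usefully examines all those possibilities ‘simultaneously’ is, of course, part of the marketing hype the next news release leans on.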

A June 22, 2015 D-Wave news release, which originated the technical details about the breakthrough found in the Harris & Harris press release, provides more information along with some marketing hype. Note: Links have been removed,

As the only manufacturer of scalable quantum processors, D-Wave breaks new ground with every succeeding generation it develops. The new processors, comprising over 128,000 Josephson tunnel junctions, are believed to be the most complex superconductor integrated circuits ever successfully yielded. They are fabricated in part at D-Wave’s facilities in Palo Alto, CA and at Cypress Semiconductor’s wafer foundry located in Bloomington, Minnesota.

“Temperature, noise, and precision all play a profound role in how well quantum processors solve problems.  Beyond scaling up the technology by doubling the number of qubits, we also achieved key technology advances prioritized around their impact on performance,” said Jeremy Hilton, D-Wave vice president, processor development. “We expect to release benchmarking data that demonstrate new levels of performance later this year.”

The 1000-qubit milestone is the result of intensive research and development by D-Wave and reflects a triumph over a variety of design challenges aimed at enhancing performance and boosting solution quality. Beyond the much larger number of qubits, other significant innovations include:

• Lower Operating Temperature: While the previous generation processor ran at a temperature close to absolute zero, the new processor runs 40% colder. The lower operating temperature enhances the importance of quantum effects, which increases the ability to discriminate the best result from a collection of good candidates.
• Reduced Noise: Through a combination of improved design, architectural enhancements and materials changes, noise levels have been reduced by 50% in comparison to the previous generation. The lower noise environment enhances problem-solving performance while boosting reliability and stability.
• Increased Control Circuitry Precision: In the testing to date, the increased precision coupled with the noise reduction has demonstrated improved precision by up to 40%. To accomplish both while also improving manufacturing yield is a significant achievement.
• Advanced Fabrication: The new processors comprise over 128,000 Josephson junctions (tunnel junctions with superconducting electrodes) in a 6-metal layer planar process with 0.25μm features, believed to be the most complex superconductor integrated circuits ever built.
• New Modes of Use: The new technology expands the boundaries of ways to exploit quantum resources. In addition to performing discrete optimization like its predecessor, firmware and software upgrades will make it easier to use the system for sampling applications.

“Breaking the 1000 qubit barrier marks the culmination of years of research and development by our scientists, engineers and manufacturing team,” said D-Wave CEO Vern Brownell. “It is a critical step toward bringing the promise of quantum computing to bear on some of the most challenging technical, commercial, scientific, and national defense problems that organizations face.”

A June 20, 2015 article in The Economist notes there is now commercial interest in quantum computing and provides a good introduction to the field. The article includes an analysis of various research efforts in Canada (they mention D-Wave), the US, and the UK. These excerpts don’t do justice to the article but will hopefully whet your appetite or provide an overview for anyone with limited time,

A COMPUTER proceeds one step at a time. At any particular moment, each of its bits—the binary digits it adds and subtracts to arrive at its conclusions—has a single, definite value: zero or one. At that moment the machine is in just one state, a particular mixture of zeros and ones. It can therefore perform only one calculation next. This puts a limit on its power. To increase that power, you have to make it work faster.

But bits do not exist in the abstract. Each depends for its reality on the physical state of part of the computer’s processor or memory. And physical states, at the quantum level, are not as clear-cut as classical physics pretends. That leaves engineers a bit of wriggle room. By exploiting certain quantum effects they can create bits, known as qubits, that do not have a definite value, thus overcoming classical computing’s limits.

… The biggest question is what the qubits themselves should be made from.

A qubit needs a physical system with two opposite quantum states, such as the direction of spin of an electron orbiting an atomic nucleus. Several things which can do the job exist, and each has its fans. Some suggest nitrogen atoms trapped in the crystal lattices of diamonds. Calcium ions held in the grip of magnetic fields are another favourite. So are the photons of which light is composed (in this case the qubit would be stored in the plane of polarisation). And quasiparticles, which are vibrations in matter that behave like real subatomic particles, also have a following.

The leading candidate at the moment, though, is to use a superconductor in which the qubit is either the direction of a circulating current, or the presence or absence of an electric charge. Both Google and IBM are banking on this approach. It has the advantage that superconducting qubits can be arranged on semiconductor chips of the sort used in existing computers. That, the two firms think, should make them easier to commercialise.

Google is also collaborating with D-Wave of Vancouver, Canada, which sells what it calls quantum annealers. The field’s practitioners took much convincing that these devices really do exploit the quantum advantage, and in any case they are limited to a narrower set of problems—such as searching for images similar to a reference image. But such searches are just the type of application of interest to Google. In 2013, in collaboration with NASA and USRA, a research consortium, the firm bought a D-Wave machine in order to put it through its paces. Hartmut Neven, director of engineering at Google Research, is guarded about what his team has found, but he believes D-Wave’s approach is best suited to calculations involving fewer qubits, while Dr Martinis and his colleagues build devices with more.
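Quantum annealing is loosely analogous to classical simulated annealing: both wander an “energy landscape,” accepting occasional uphill moves with a probability that shrinks over time, and settle into a low point. Here is a minimal classical sketch of that idea (illustrative only; D-Wave’s machines encode problems as Ising models and exploit quantum effects such as tunnelling, none of which this toy captures):

```python
import math
import random

def energy(x: float) -> float:
    """A bumpy 1-D 'energy landscape' with several local minima."""
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(steps: int = 10_000) -> float:
    x = random.uniform(-10, 10)
    temperature = 10.0
    for _ in range(steps):
        candidate = x + random.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature falls.
        if delta < 0 or random.random() < math.exp(-delta / temperature):
            x = candidate
        temperature *= 0.999   # cooling schedule
    return x

best = simulated_annealing()
print(f"settled near x = {best:.3f}, energy = {energy(best):.3f}")
```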

It’s not clear to me if the writers at The Economist were aware of D-Wave’s latest breakthrough at the time of writing, but I think not. In any event, they (The Economist writers) have included a provocative tidbit about quantum encryption,

Documents released by Edward Snowden, a whistleblower, revealed that the Penetrating Hard Targets programme of America’s National Security Agency was actively researching “if, and how, a cryptologically useful quantum computer can be built”. In May IARPA [Intelligence Advanced Research Projects Activity], the American government’s intelligence-research arm, issued a call for partners in its Logical Qubits programme, to make robust, error-free qubits. In April, meanwhile, Tanja Lange and Daniel Bernstein of Eindhoven University of Technology, in the Netherlands, announced PQCRYPTO, a programme to advance and standardise “post-quantum cryptography”. They are concerned that encrypted communications captured now could be subjected to quantum cracking in the future. That means strong pre-emptive encryption is needed immediately.
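Why would a “cryptologically useful” quantum computer be so disruptive? Much of today’s public-key encryption, RSA in particular, rests on the difficulty of factoring large integers, and Shor’s algorithm running on a sufficiently large quantum computer would factor them efficiently. A toy illustration with absurdly small numbers (nothing here is a real cryptosystem; real keys use primes that are hundreds of digits long):

```python
# Toy RSA with tiny primes, to show what "quantum cracking" would break.
# Requires Python 3.8+ for the modular-inverse form of pow().

p, q = 61, 53                 # secret primes (laughably small, for illustration)
n = p * q                     # public modulus: 3233
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered == message)   # prints the ciphertext and True

# An attacker who can factor n recovers p and q, and from them d.
# That factoring step is exactly what Shor's algorithm would make easy --
# hence the push for "post-quantum" schemes that don't rely on it.
```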

I encourage you to read the Economist article.

Two final comments. (1) The latest piece, prior to this one, about D-Wave was in a Feb. 6, 2015 posting about then-new investment in the company. (2) A Canadian effort in the field of quantum cryptography was mentioned in a May 11, 2015 posting (scroll down about 50% of the way) featuring a profile of Raymond Laflamme, of the University of Waterloo’s Institute for Quantum Computing, in the context of an announcement about the science media initiative Research2Reality.

‘Eddie’ the robot, US National Security Agency talks back to Ed Snowden, at TED 2014’s Session 8: Hacked

The session started 30 minutes earlier than originally scheduled and, as a consequence, I got to the party a little late. First up, Marco Tempest, magician and technoillusionist, introduced and played with EDI (electronic deceptive intelligence; pronounced Eddie), a large, anthropomorphic robot (it had a comic-book-style face on its screen and was reminiscent of Ed Snowden’s appearance in a telepresent robot). This was a slick presentation combining magic and robotics, bringing to mind Arthur C. Clarke’s comment, “Any sufficiently advanced technology is indistinguishable from magic,” which I’m sure Tempest mentioned before I got there. Interestingly, he articulated the robot’s perspective that humans are fragile and unpredictable, inspiring fear and uncertainty in the robot. It’s the first time I’ve encountered our relationship from the robot’s perspective. Thank you, Mr. Tempest.

Rick Ledgett, deputy director of the US National Security Agency (NSA), appeared on screen, attending remotely but not telepresently as Ed Snowden had earlier this week, to be interviewed by a TED moderator (Chris Anderson, I think). Technical problems meant the interview was interrupted and stopped while the tech guys scrambled to fix the problem. Before he was interrupted, Ledgett answered a question as to whether or not Snowden could have taken alternative actions. Ledgett made clear that he (and presumably the NSA) does not consider Snowden to be a whistleblower. It was a little confusing, but Ledgett seemed to be suggesting that whistleblowing is legitimate only when it’s directed at the corporate sector. As well, Ledgett said that Snowden could have reported to his superiors and to various oversight agencies rather than making his findings public. These responses, of course, are predictable, so what made the interview interesting was Ledgett’s demeanour. He was careful not to say anything inflammatory and seemed reasonable. He is the right person to represent the NSA. He doesn’t seem to appreciate how dangerous and difficult whistleblowing is, whether it’s done to a corporate entity or a government agency. Whether or not you agree with Snowden’s actions, the response to them is a classic one. I went to a talk some years ago where the speaker, Mark Wexler, who teaches business ethics at Simon Fraser University, said that whistleblowers often lose their careers, their relationships, and their families due to the pressures brought to bear on them.

Ledgett rejoins the TED stage after Kurzweil, and it sounds like he has been huddling with a communications team as he reframes his and Snowden’s participation as part of an important conversation. Clearly, the TED team has been in touch with Snowden, who refutes Ledgett’s suggestions about alternative routes. Now Ledgett talks tough, describing Snowden as arrogant. He states somewhere in all this that Snowden’s actions have endangered lives, and the moderator presses him for examples. Ledgett’s response features examples that are general and scenario-based. When pressed, Ledgett indulges in a little sarcasm, suggesting that things would be easier if there were a badboy.com site where nefarious individuals would hang out. Ledgett makes some valid points about the need for some secrecy, and he does state that he feels transparency is important and that the NSA has not been good about it. Ledgett notes that every country in the world has a means of forcing companies to reveal information about users, and that some countries use the notion (effectively lying) that they don’t force revelations as a marketing tool. The interview switches to a discussion of metadata, its importance, and whether it provides more information about individuals than most people realize. Ledgett disputes that notion. I have to go; I hope to get back and point you to other reports with more info about this fascinating interview.

Ed Yong, uber science blogger, from his TED biography,

Ed Yong blogs with a mission: igniting excitement for science in everyone, regardless of their education or background.

The award-winning blog Not Exactly Rocket Science (hosted by National Geographic) is the epicenter of Yong’s formidable web and social media presence. In its posts, he tackles the hottest and most bizarre topics in science journalism. When not blogging, he also manages to contribute to Nature, Wired, Scientific American and many other web and print outlets. As he says, “The only one that matters to me, as far as my blog is concerned, is that something interests me. That is, excites or inspires or amuses me.”

Yong talked about mind-controlling parasites such as tapeworms and Gordian worms in the context of his fascination with how parasites control animal behaviour. (I posted about a parasite infecting and controlling honey bees in an Aug. 2, 2012 piece.) Yong is liberal, in a very witty way, with references to castrating, mind-controlling parasites. He also suggests that humans may in some instances (estimates suggest up to 1/3 of us) be controlled by parasites, and that our notions of individual autonomy are a little overblown.

Ray Kurzweil, Mr. Singularity, describes evolution and suggests that humans are not evolving quickly enough given rapidly changing circumstances. He focuses on human brains and the current theories about their processing capabilities and segues into artificial intelligence. He makes the case that we are preparing for a quantitative leap in intelligence as our organic brains are augmented by the artificial.

Kurzweil was last mentioned here in a Jan. 6, 2010 posting in the context of reverse-engineering brains.

Surprise: telepresent Ed Snowden at TED 2014’s Session 2: Retrospect

The first session (Retrospect) this morning held a few surprises, i.e., unexpected speakers Brian Greene and Ed Snowden (whistleblower re: extensive and [illegal or nonlegal?] surveillance by the US National Security Agency [NSA]). I’m not sure how Snowden fits into the session theme of Retrospect but I think that’s less the point than the sheer breathtaking surprise and his topic’s importance to current public discourse around much of the globe.

Snowden is mostly focused on PRISM (from its Wikipedia entry; Note: Links have been removed),

PRISM is a clandestine mass electronic surveillance data mining program launched in 2007 by the National Security Agency (NSA), with participation from an unknown date by the British equivalent agency, GCHQ.[1][2][3] PRISM is a government code name for a data-collection effort known officially by the SIGAD US-984XN.[4][5] The Prism program collects stored Internet communications based on demands made to Internet companies such as Google Inc. and Apple Inc. under Section 702 of the FISA Amendments Act of 2008 to turn over any data that match court-approved search terms.[6] The NSA can use these Prism requests to target communications that were encrypted when they traveled across the Internet backbone, to focus on stored data that telecommunication filtering systems discarded earlier,[7][8] and to get data that is easier to handle, among other things.[9]

He also described Boundless Informant in response to a question from the session co-moderator, Chris Anderson (from its Wikipedia entry; Note: Links have been removed),

Boundless Informant or BOUNDLESSINFORMANT is a big data analysis and data visualization tool used by the United States National Security Agency (NSA). It gives NSA managers summaries of the NSA’s world wide data collection activities by counting metadata.[1] The existence of this tool was disclosed by documents leaked by Edward Snowden, who worked at the NSA for the defense contractor Booz Allen Hamilton.[2]
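“Counting metadata” sounds abstract, but the underlying operation is mundane aggregation. Here’s a hypothetical sketch (the record format and field names are my invention; nothing below reflects actual NSA tooling) of how per-country summary counts might be produced from call records that contain no content at all:

```python
from collections import Counter

# Hypothetical call-metadata records: no content, just who/when/where.
records = [
    {"caller": "A", "callee": "B", "country": "DE", "duration_s": 120},
    {"caller": "A", "callee": "C", "country": "DE", "duration_s": 45},
    {"caller": "D", "callee": "E", "country": "BR", "duration_s": 300},
]

# The kind of per-country tally a dashboard like the one described above
# would visualize -- note how much is knowable without reading any content.
by_country = Counter(r["country"] for r in records)
print(by_country)   # Counter({'DE': 2, 'BR': 1})
```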

Anderson asks Snowden, “Why should we care [about increased surveillance]? After all we’re not doing anything wrong.” Snowden responds that we have a right to privacy and that our actions can be misinterpreted or used against us at any time, present or future.

Anderson mentions Dick Cheney, and Snowden notes that Cheney in the past made some overblown comments about Assange, comments Cheney now dismisses in the face of what he considers to be Snowden’s greater trespass.

Snowden is now commenting on the NSA’s attempt to undermine internet security by misleading their partners. He again makes a plea for privacy. He also notes that US security has largely been defensive, i.e., protection against other countries’ attempts to get US secrets. These latest programmes change US security from a defensive strategy to an offensive strategy (football metaphor). These changes have been made without public scrutiny.

Anderson asks Snowden about his personal safety.  His response (more or less), “I go to sleep every morning thinking about what I can do to help the American people. … I’m happy to do what I can.”

Anderson asks the audience members whether they think Snowden’s was a reckless act or an heroic act. Some hands go up for reckless, more hands go up for heroic, and many hands remain still.

Snowden, “We need to keep the internet safe for us and if we don’t act we will lose our freedom.”

Anderson asks Tim Berners-Lee to come up to the stage and the discussion turns to his (Berners-Lee) proposal for a Magna Carta for the internet.

Tim Berners-Lee biography from his Wikipedia entry,

Sir Timothy John “Tim” Berners-Lee, OM, KBE, FRS, FREng, FRSA, DFBCS (born 8 June 1955), also known as “TimBL”, is a British computer scientist, best known as the inventor of the World Wide Web. He made a proposal for an information management system in March 1989,[4] and he implemented the first successful communication between a Hypertext Transfer Protocol (HTTP) client and server via the Internet sometime around mid November.[5][6][7][8][9]

Berners-Lee is the director of the World Wide Web Consortium (W3C), which oversees the Web’s continued development. He is also the founder of the World Wide Web Foundation, and is a senior researcher and holder of the Founders Chair at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).[10] He is a director of the Web Science Research Initiative (WSRI),[11] and a member of the advisory board of the MIT Center for Collective Intelligence.[12][13]

The Magna Carta (from its Wikipedia entry; Note: Links have been removed),

Magna Carta (Latin for Great Charter),[1] also called Magna Carta Libertatum or The Great Charter of the Liberties of England, is an Angevin charter originally issued in Latin in June 1215. It was sealed under oath by King John at Runnymede, on the bank of the River Thames near Windsor, England, on June 15, 1215.[2]

Magna Carta was the first document forced onto a King of England by a group of his subjects, the feudal barons, in an attempt to limit his powers by law and protect their rights.

The charter is widely known throughout the English speaking world as an important part of the protracted historical process that led to the rule of constitutional law in England and beyond.

When asked by Anderson if he would return to the US if given amnesty, Snowden says yes as long as he can continue his work. He’s not willing to trade his work of bringing these issues to the public forefront in order to go home again.

German nanotechnology industry mood lightens

A March 11, 2014 news item on Nanowerk proclaims a mood change for some sectors of German industry, including nanotechnology,

For the German companies dealing with microtechnology, nanotechnology, advanced materials and optical technologies, business in 2013 has developed just as the industry had predicted earlier that year: at a constant level. But things are supposed to get better in 2014. The companies do not expect enormous growth, but they are more positive than they have been ever since the outbreak of the financial and economic crisis. Orders, production and sales figures are expected to rise noticeably in 2014. Areas excluded from this optimistic outlook are staff and financing: the number of employees is likely to remain static in 2014 while the funding situation might even reach a new low.

The March 11, 2014 IVAM news release, which originated the news item, provides more detail about this change of mood,

The situation and mood of the micro- and nanotechnology industry, which the IVAM Microtechnology Network queried in a recent economic data survey, coincide with the overall economic development in Germany and general forecasts for 2014. According to publications of the German Federal Statistical Office, the gross domestic product in Germany grew by only 0.4 percent in 2013 – the lowest growth since the crisis year 2009. For 2014, the Ifo Institute predicts strong growth for the German economy; exports especially are expected to increase.

The German micro- and nanotechnology industry is expecting increases during 2014 above all in the area of orders. Production and sales are each likely to rise for more than 60 percent of companies. But just a quarter of companies intend to hire more staff, and only one tenth expect increases in the field of financing. Nevertheless, 30 percent of companies are planning to make investments, which is a higher proportion than in previous years.

The new research funding program of the European Union, Horizon 2020, has aroused certain hopes of enhanced financing opportunities for innovation projects. Compared to the 7th Framework Program, Horizon 2020 is designed in a way that is meant to facilitate access to EU funding for small and medium-sized enterprises. Small companies especially are still a little sceptical in this regard.

In the IVAM survey, 43 percent of micro- and nanotechnology companies say that EU funding is essential for them in order to implement their innovation projects. Three quarters of companies are planning to apply for funds from the new program, but only 23 percent think that their opportunities to obtain EU funding have improved with Horizon 2020. Many small high-tech enterprises presume that the application still takes too much time and effort. However, since the program had just started at the time of the survey, there are no experiences that might confirm or refute these first impressions.

The NSA surveillance scandal has caused a great insecurity among the micro- and nanotechnology companies in Germany concerning the safety of their technical knowledge. The majority of respondents (54 percent) would not even make a guess at whether their company’s know-how is safe from spying. A quarter of companies believe that they are sufficiently safe from spying. Only 21 percent are convinced that they do not have adequate protection. A little more than a third of companies have drawn consequences from the NSA scandal and taken steps to enhance the safety of their data.

Most companies agree that each company is responsible to ensure the best possible safety of its data. But still, almost 90 percent demand that authorities like national governments and the European Commission should intervene and impose stricter regulations. They feel that although each company bears a partial responsibility, the state must also fulfil its responsibilities, establish a clear legal framework for data security, make sure that regulations are complied with, and impose sanctions when they are not.

IVAM has provided this chart amongst others to illustrate their data,

Courtesy: IVAM. [downloaded from http://www.ivam.de/news/pm_ivam_survey_2014]

You can access the 2014 IVAM survey from this page.