Tag Archives: John Markoff

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between the US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a degree in communications [sociology-based] anytime from the 1960s into the late 1980s [at least]), I took the movement of technology from military research to consumer applications as the standard. Television is a classic example, but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now dark factories, which can run without lights because there are no people in them, are starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
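For readers who like to see what those coefficients mean in practice, here is a rough sketch of the arithmetic. The per-robot figures come from the study as reported; the workforce and robot counts below are invented purely for illustration.

```python
# Back-of-envelope sketch of the NBER estimates quoted above. The
# coefficients are as reported (6.2 jobs per robot; 0.18-0.34 pp
# employment-rate drop and 0.25-0.5 pp wage drop per robot per
# thousand workers). The labor-market size is a made-up example.

def robot_impact(workers, robots, jobs_per_robot=6.2,
                 emp_drop_range=(0.18, 0.34), wage_drop_range=(0.25, 0.5)):
    """Return (jobs lost, employment-rate drop range, wage drop range)
    implied by the study's per-robot coefficients."""
    robots_per_1k = robots / (workers / 1000)
    jobs_lost = robots * jobs_per_robot
    emp_drop = tuple(c * robots_per_1k for c in emp_drop_range)
    wage_drop = tuple(c * robots_per_1k for c in wage_drop_range)
    return jobs_lost, emp_drop, wage_drop

# Hypothetical local labor market: 100,000 workers, 200 robots.
jobs, emp, wages = robot_impact(workers=100_000, robots=200)
print(f"jobs lost: {jobs:.0f}")                                  # 1240
print(f"employment rate drop: {emp[0]:.2f}-{emp[1]:.2f} pp")     # 0.36-0.68 pp
print(f"wage drop: {wages[0]:.2f}-{wages[1]:.2f} pp")            # 0.50-1.00 pp
```

Even at two robots per thousand workers, the implied effects are far from the "50-100 more years away" horizon Mnuchin suggested.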

I can’t help wondering: if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, is that representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc.? What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research which contradicts a top-tier Trump government official or makes one look foolish? Will it be suppressed?

You can find the report, “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo (NBER [US National Bureau of Economic Research] Working Paper Series, Working Paper 23285, released March 2017), here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally, in a discussion about AI, it’s all US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way), where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Brain-to-brain communication, organic computers, and BAM (brain activity map), the connectome

Miguel Nicolelis, a professor at Duke University, has been making international headlines lately with two brain projects. The first one about implanting a brain chip that allows rats to perceive infrared light was mentioned in my Feb. 15, 2013 posting. The latest project is a brain-to-brain (rats) communication project as per a Feb. 28, 2013 news release on *EurekAlert,

Researchers have electronically linked the brains of pairs of rats for the first time, enabling them to communicate directly to solve simple behavioral puzzles. A further test of this work successfully linked the brains of two animals thousands of miles apart—one in Durham, N.C., and one in Natal, Brazil.

The results of these projects suggest the future potential for linking multiple brains to form what the research team is calling an “organic computer,” which could allow sharing of motor and sensory information among groups of animals. The study was published Feb. 28, 2013, in the journal Scientific Reports.

“Our previous studies with brain-machine interfaces had convinced us that the rat brain was much more plastic than we had previously thought,” said Miguel Nicolelis, M.D., PhD, lead author of the publication and professor of neurobiology at Duke University School of Medicine. “In those experiments, the rat brain was able to adapt easily to accept input from devices outside the body and even learn how to process invisible infrared light generated by an artificial sensor. So, the question we asked was, ‘if the brain could assimilate signals from artificial sensors, could it also assimilate information input from sensors from a different body?'”

Ben Schiller in a Mar. 1, 2013 article for Fast Company describes both the latest experiment and the work leading up to it,

First, two rats were trained to press a lever when a light went on in their cage. Press the right lever, and they would get a reward–a sip of water. The animals were then split in two: one cage had a lever with a light, while another had a lever without a light. When the first rat pressed the lever, the researchers sent electrical activity from its brain to the second rat. It pressed the right lever 70% of the time (more than half).

In another experiment, the rats seemed to collaborate. When the second rat didn’t push the right lever, the first rat was denied a drink. That seemed to encourage the first to improve its signals, raising the second rat’s lever-pushing success rate.

Finally, to show that brain-communication would work at a distance, the researchers put one rat in a cage in North Carolina, and another in Natal, Brazil. Despite noise on the Internet connection, the brain-link worked just as well–the rate at which the second rat pushed the lever was similar to the experiment conducted solely in the U.S.

The Duke University Feb. 28, 2013 news release, the origin for the news release on EurekAlert, provides more specific details about the experiments and the rats’ training,

To test this hypothesis, the researchers first trained pairs of rats to solve a simple problem: to press the correct lever when an indicator light above the lever switched on, which rewarded the rats with a sip of water. They next connected the two animals’ brains via arrays of microelectrodes inserted into the area of the cortex that processes motor information.

One of the two rodents was designated as the “encoder” animal. This animal received a visual cue that showed it which lever to press in exchange for a water reward. Once this “encoder” rat pressed the right lever, a sample of its brain activity that coded its behavioral decision was translated into a pattern of electrical stimulation that was delivered directly into the brain of the second rat, known as the “decoder” animal.

The decoder rat had the same types of levers in its chamber, but it did not receive any visual cue indicating which lever it should press to obtain a reward. Therefore, to press the correct lever and receive the reward it craved, the decoder rat would have to rely on the cue transmitted from the encoder via the brain-to-brain interface.

The researchers then conducted trials to determine how well the decoder animal could decipher the brain input from the encoder rat to choose the correct lever. The decoder rat ultimately achieved a maximum success rate of about 70 percent, only slightly below the possible maximum success rate of 78 percent that the researchers had theorized was achievable based on success rates of sending signals directly to the decoder rat’s brain.

Importantly, the communication provided by this brain-to-brain interface was two-way. For instance, the encoder rat did not receive a full reward if the decoder rat made a wrong choice. The result of this peculiar contingency, said Nicolelis, led to the establishment of a “behavioral collaboration” between the pair of rats.

“We saw that when the decoder rat committed an error, the encoder basically changed both its brain function and behavior to make it easier for its partner to get it right,” Nicolelis said. “The encoder improved the signal-to-noise ratio of its brain activity that represented the decision, so the signal became cleaner and easier to detect. And it made a quicker, cleaner decision to choose the correct lever to press. Invariably, when the encoder made those adaptations, the decoder got the right decision more often, so they both got a better reward.”
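The feedback loop Nicolelis describes can be caricatured in a few lines of code. To be clear, this is my own toy model, not the researchers’ method: the “brain signal” below is just a success probability, and the fidelity, boost, and trial-count values are invented. The one number taken from the study is the 78% theoretical ceiling.

```python
import random

def run_session(n_trials=500, fidelity=0.65, boost=0.002, cap=0.78, seed=7):
    """Toy model of the encoder/decoder loop: each trial, the encoder's
    choice reaches the decoder intact with probability `fidelity`;
    otherwise the decoder presses the wrong lever. After each decoder
    error, the encoder 'cleans up' its signal (fidelity rises by
    `boost`), capped at the 78% ceiling the researchers theorized.
    All parameter values are illustrative, not from the study."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(n_trials):
        if rng.random() < fidelity:
            successes += 1                         # decoder chose correctly
        else:
            fidelity = min(cap, fidelity + boost)  # "behavioral collaboration"
    return successes / n_trials

print(f"decoder success rate: {run_session():.0%}")
```

The point of the sketch is the contingency: because the encoder only improves its signal after the decoder’s errors, the pair’s joint success rate drifts upward over a session, which is qualitatively what the team reported.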

In a second set of experiments, the researchers trained pairs of rats to distinguish between a narrow or wide opening using their whiskers. If the opening was narrow, they were taught to nose-poke a water port on the left side of the chamber to receive a reward; for a wide opening, they had to poke a port on the right side.

The researchers then divided the rats into encoders and decoders. The decoders were trained to associate stimulation pulses with the left reward poke as the correct choice, and an absence of pulses with the right reward poke as correct. During trials in which the encoder detected the opening width and transmitted the choice to the decoder, the decoder had a success rate of about 65 percent, significantly above chance.
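How far above chance is 65% when chance is a coin flip between two pokes? The article doesn’t give the trial count, so the n=200 below is an assumption for illustration; the calculation itself is a standard one-sided binomial tail.

```python
from math import comb

def binomial_p_value(successes, n, p=0.5):
    """One-sided probability of getting at least `successes` hits in `n`
    trials if the decoder were merely guessing between the two pokes."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(successes, n + 1))

n = 200                 # assumed trial count (not given in the article)
hits = int(0.65 * n)    # 130 correct pokes at a 65% success rate
print(f"P(>= {hits}/{n} by chance) = {binomial_p_value(hits, n):.2e}")
```

At that assumed trial count, the probability of hitting 65% by luck alone is well below one in a thousand, which is presumably what “significantly above chance” refers to.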

To test the transmission limits of the brain-to-brain communication, the researchers placed an encoder rat in Brazil, at the Edmond and Lily Safra International Institute of Neuroscience of Natal (ELS-IINN), and transmitted its brain signals over the Internet to a decoder rat in Durham, N.C. They found that the two rats could still work together on the tactile discrimination task.

“So, even though the animals were on different continents, with the resulting noisy transmission and signal delays, they could still communicate,” said Miguel Pais-Vieira, PhD, a postdoctoral fellow and first author of the study. “This tells us that it could be possible to create a workable network of animal brains distributed in many different locations.”

Will Oremus in his Feb. 28, 2013 article for Slate seems a little less buoyant about the implications of this work,

Nicolelis believes this opens the possibility of building an “organic computer” that links the brains of multiple animals into a single central nervous system, which he calls a “brain-net.” Are you a little creeped out yet? In a statement, Nicolelis adds:

We cannot even predict what kinds of emergent properties would appear when animals begin interacting as part of a brain-net. In theory, you could imagine that a combination of brains could provide solutions that individual brains cannot achieve by themselves.

That sounds far-fetched. But Nicolelis’ lab is developing quite the track record of “taking science fiction and turning it into science,” says Ron Frostig, a neurobiologist at UC-Irvine who was not involved in the rat study. “He’s the most imaginative neuroscientist right now.” (Frostig made it clear he meant this as a compliment, though skeptics might interpret the word less charitably.)

The most extensive coverage I’ve given Nicolelis and his work (including the Walk Again project) was in a March 16, 2012 post titled, Monkeys, mind control, robots, prosthetics, and the 2014 World Cup (soccer/football), although there are other mentions including in this Oct. 6, 2011 posting titled, Advertising for the 21st Century: B-Reel, ‘storytelling’, and mind control.  By the way, Nicolelis hopes to have a paraplegic individual (using technology Nicolelis is developing for the Walk Again project) kick the opening soccer/football to the 2014 World Cup games in Brazil.

While there’s much excitement about Nicolelis and his work, there are other ‘brain’ projects being developed in the US including the Brain Activity Map (BAM), which James Lewis notes in his Mar. 1, 2013 posting on the Foresight Institute blog,

A proposal alluded to by President Obama in his State of the Union address [Feb. 2013] to construct a dynamic “functional connectome” Brain Activity Map (BAM) would leverage current progress in neuroscience, synthetic biology, and nanotechnology to develop a map of each firing of every neuron in the human brain—a hundred billion neurons sampled on millisecond time scales. Although not the intended goal of this effort, a project on this scale, if it is funded, should also indirectly advance efforts to develop artificial intelligence and atomically precise manufacturing.

As Lewis notes in his posting, there’s an excellent description of BAM and other brain projects, as well as a discussion about how these ideas are linked (not necessarily by individuals but by the overall direction of work being done in many labs and in many countries across the globe) in Robert Blum’s Feb. (??), 2013 posting titled, BAM: Brain Activity Map Every Spike from Every Neuron, on his eponymous blog. Blum also offers an extensive set of links to the reports and stories about BAM. From Blum’s posting,

The essence of the BAM proposal is to create the technology over the coming decade to be able to record every spike from every neuron in the brain of a behaving organism. While this notion seems insanely ambitious, coming from a group of top investigators, the paper deserves scrutiny. At minimum it shows what might be achieved in the future by the combination of nanotechnology and neuroscience.

In 2013, as I write this, two European Flagship projects have just received funding for one billion euro each (1.3 billion dollars each). The Human Brain Project is an outgrowth of the Blue Brain Project, directed by Prof. Henry Markram in Lausanne, which seeks to create a detailed simulation of the human brain. The Graphene Flagship, based in Sweden, will explore uses of graphene for, among others, creation of nanotech-based supercomputers. The potential synergy between these projects is a source of great optimism.

The goal of the BAM Project is to elaborate the functional connectome of a live organism: that is, not only the static (axo-dendritic) connections but how they function in real-time as thinking and action unfold.

The European Flagship Human Brain Project will create the computational capability to simulate large, realistic neural networks. But to compare the model with reality, a real-time, functional, brain-wide connectome must also be created. Nanotech and neuroscience are mature enough to justify funding this proposal.
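To get a feel for why “every spike from every neuron” is such an ambitious target, a back-of-envelope calculation helps. The neuron count and millisecond timescale come from the proposal as described above; the one-bit-per-sample encoding is my own simplifying assumption.

```python
# Rough scale of recording every neuron at millisecond resolution.
NEURONS = 100e9          # ~a hundred billion neurons in a human brain
SAMPLE_RATE = 1_000      # one sample per millisecond
BITS_PER_SAMPLE = 1      # spike / no-spike (a simplifying assumption)

bits_per_second = NEURONS * SAMPLE_RATE * BITS_PER_SAMPLE
terabytes_per_second = bits_per_second / 8 / 1e12
print(f"raw data rate: {terabytes_per_second:.1f} TB/s")  # 12.5 TB/s
```

Even with that maximally compact encoding, the raw stream is on the order of twelve terabytes per second, which makes clear why Blum frames BAM as a decade-long technology-development effort rather than an experiment you could run today.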

I highly recommend reading Blum’s technical description of neural spikes; understanding that concept, or any other in his post, doesn’t require an advanced degree. Note: Blum holds a number of degrees and diplomas, including an MD (neuroscience) from the University of California at San Francisco and a PhD in computer science and biostatistics from California’s Stanford University.

The Human Brain Project has been mentioned here previously. The most recent mention is in a Jan. 28, 2013 posting about its newly gained status as one of two European Flagship initiatives (the other is the Graphene initiative), each meriting one billion euros of research funding over 10 years. Today, however, is the first time I’ve encountered the BAM project and I’m fascinated. Luckily, John Markoff’s Feb. 17, 2013 article for The New York Times provides some insight into this US initiative (Note: I have removed some links),

The Obama administration is planning a decade-long scientific effort to examine the workings of the human brain and build a comprehensive map of its activity, seeking to do for the brain what the Human Genome Project did for genetics.

The project, which the administration has been looking to unveil as early as March, will include federal agencies, private foundations and teams of neuroscientists and nanoscientists in a concerted effort to advance the knowledge of the brain’s billions of neurons and gain greater insights into perception, actions and, ultimately, consciousness.

Moreover, the project holds the potential of paving the way for advances in artificial intelligence.

What I find particularly interesting is the reference back to the human genome project, which may explain why BAM is also referred to as a ‘connectome’.

ETA Mar.6.13: I have found a Human Connectome Project Mar. 6, 2013 news release on EurekAlert, which leaves me confused. This does not seem to be related to BAM, although the articles about BAM did reference a ‘connectome’. At this point, I’m guessing that BAM and the ‘Human Connectome Project’ are two related but different projects and the reference to a ‘connectome’ in the BAM material is meant generically.  I previously mentioned the Human Connectome Project panel discussion held at the AAAS (American Association for the Advancement of Science) 2013 meeting in my Feb. 7, 2013 posting.

* Corrected EurkAlert to EurekAlert on June 14, 2013.