AI (artificial intelligence) text generator, too dangerous to release?

Could this latest version of OpenAI’s text generator be so good that it would fool you? And, following on that thought, could the company’s reluctance to release the research be genuine, or is it a publicity stunt? Here’s a sample of text from the GPT-2 AI model, from a February 15, 2019 article by Mark Frauenfelder for Boing Boing,

Recycling is good for the world.
NO! YOU COULD NOT BE MORE WRONG!!
MODEL COMPLETION (MACHINE-WRITTEN, 25 TRIES)
Recycling is NOT good for the world. It is bad for the environment, it is bad for our health, and it is bad for our economy. I’m not kidding. Recycling is not good for the environment. It is destructive to the earth and it is a major contributor to global warming. Recycling is not good for our health. It contributes to obesity and diseases like heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product, and in turn, the price of everything that is made with that product. Recycling is not good for our nation. We pay a tremendous price for the privilege of having the world’s most advanced and efficient recycling system. Recycling is a huge, colossal waste of time, energy, money, and resources. And THAT is why we need to get back to basics and get back to basics in our recycling efforts. One of the best ways to start is to look at the process of creating a paper product. When you make a paper product, it is basically a long chain of materials. Everything from the raw materials (wood, cardboard, paper, etc.), to the reagents (dyes, solvents, etc.) to the printing equipment (chemicals, glue, paper, ink, etc.), to the packaging, to the packaging materials (mercury, chemicals, etc.) to the processing equipment (heating, cooling, etc.), to the packaging materials, to the packaging materials that are shipped overseas and to the packaging materials that are used in the United States. Each step along the way creates tons of waste that we constantly have to clean up. The process of making a paper product is a very wasteful one. But the end result is something that all of us need to consume. And if we want to keep the recycling process running efficiently, then we really need to think about each and every step that goes into making a paper product.

The first few sentences don’t work for me, but once the discussion turns to making paper products, it becomes more convincing. As to whether the company’s reluctance to release the research is genuine or a publicity stunt, I don’t know. However, there was a fair degree of interest in GPT-2 after the decision.

From a February 14, 2019 article by Alex Hern for the Guardian,

OpenAI, a nonprofit research company backed by Elon Musk, Reid Hoffman, Sam Altman, and others, says its new AI model, called GPT2, is so good and the risk of malicious use so high that it is breaking from its normal practice of releasing the full research to the public in order to allow more time to discuss the ramifications of the technological breakthrough.

At its core, GPT2 is a text generator. The AI system is fed text, anything from a few words to a whole page, and asked to write the next few sentences based on its predictions of what should come next. The system is pushing the boundaries of what was thought possible, both in terms of the quality of the output, and the wide variety of potential uses.

When used to simply generate new text, GPT2 is capable of writing plausible passages that match what it is given in both style and subject. It rarely shows any of the quirks that mark out previous AI systems, such as forgetting what it is writing about midway through a paragraph, or mangling the syntax of long sentences.

Feed it the opening line of George Orwell’s Nineteen Eighty-Four – “It was a bright cold day in April, and the clocks were striking thirteen” – and the system recognises the vaguely futuristic tone and the novelistic style, and continues with: …

Sean Gallagher’s February 15, 2019 posting on the Ars Technica blog provides some insight, partially written in a style sometimes associated with gossip (Note: Links have been removed),

OpenAI is funded by contributions from a group of technology executives and investors connected to what some have referred to as the PayPal “mafia”—Elon Musk, Peter Thiel, Jessica Livingston, and Sam Altman of YCombinator, former PayPal COO and LinkedIn co-founder Reid Hoffman, and former Stripe Chief Technology Officer Greg Brockman. [emphasis mine] Brockman now serves as OpenAI’s CTO. Musk has repeatedly warned of the potential existential dangers posed by AI, and OpenAI is focused on trying to shape the future of artificial intelligence technology—ideally moving it away from potentially harmful applications.

Given present-day concerns about how fake content has been used to both generate money for “fake news” publishers and potentially spread misinformation and undermine public debate, GPT-2’s output certainly qualifies as concerning. Unlike other text generation “bot” models, such as those based on Markov chain algorithms, the GPT-2 “bot” did not lose track of what it was writing about as it generated output, keeping everything in context.

For example: given a two-sentence entry, GPT-2 generated a fake science story on the discovery of unicorns in the Andes, a story about the economic impact of Brexit, a report about a theft of nuclear materials near Cincinnati, a story about Miley Cyrus being caught shoplifting, and a student’s report on the causes of the US Civil War.

Each matched the style of the genre from the writing prompt, including manufacturing quotes from sources. In other samples, GPT-2 generated a rant about why recycling is bad, a speech written by John F. Kennedy’s brain transplanted into a robot (complete with footnotes about the feat itself), and a rewrite of a scene from The Lord of the Rings.

While the model required multiple tries to get a good sample, GPT-2 generated “good” results based on “how familiar the model is with the context,” the researchers wrote. “When prompted with topics that are highly represented in the data (Brexit, Miley Cyrus, Lord of the Rings, and so on), it seems to be capable of generating reasonable samples about 50 percent of the time. The opposite is also true: on highly technical or esoteric types of content, the model can perform poorly.”

There were some weak spots encountered in GPT-2’s word modeling—for example, the researchers noted it sometimes “writes about fires happening under water.” But the model could be fine-tuned to specific tasks and perform much better. “We can fine-tune GPT-2 on the Amazon Reviews dataset and use this to let us write reviews conditioned on things like star rating and category,” the authors explained.
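To see why Gallagher singles out Markov chain “bots” as losing track of what they’re writing, here’s a minimal sketch of my own (not from any of the articles quoted here) of a word-level Markov chain generator. Because it conditions on only the last word emitted, it has no memory of the topic, which is exactly the quirk GPT-2 avoids,

```python
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    successors = defaultdict(list)
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def generate(successors, start, length=12, seed=0):
    """Emit text by repeatedly sampling a successor of the last word only.
    With a single word of context, the model cannot keep a topic in mind."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        candidates = successors.get(word)
        if not candidates:
            break
        word = rng.choice(candidates)
        output.append(word)
    return " ".join(output)

# A deliberately tiny, made-up corpus for demonstration purposes.
corpus = ("recycling is good for the world recycling is bad for the economy "
          "the world is good and the economy is bad")
model = build_bigrams(corpus)
print(generate(model, "recycling"))
```

Run it a few times with different seeds and the output wanders between “good,” “bad,” “world,” and “economy” with no overall argument, which is the context loss Gallagher describes.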

James Vincent’s February 14, 2019 article for The Verge offers a deeper dive into the world of AI text agents and what makes GPT-2 so special (Note: Links have been removed),

For decades, machines have struggled with the subtleties of human language, and even the recent boom in deep learning powered by big data and improved processors has failed to crack this cognitive challenge. Algorithmic moderators still overlook abusive comments, and the world’s most talkative chatbots can barely keep a conversation alive. But new methods for analyzing text, developed by heavyweights like Google and OpenAI as well as independent researchers, are unlocking previously unheard-of talents.

OpenAI’s new algorithm, named GPT-2, is one of the most exciting examples yet. It excels at a task known as language modeling, which tests a program’s ability to predict the next word in a given sentence. Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.

The writing it produces is usually easily identifiable as non-human. Although its grammar and spelling are generally correct, it tends to stray off topic, and the text it produces lacks overall coherence. But what’s really impressive about GPT-2 is not its fluency but its flexibility.

This algorithm was trained on the task of language modeling by ingesting huge numbers of articles, blogs, and websites. By using just this data — and with no retooling from OpenAI’s engineers — it achieved state-of-the-art scores on a number of unseen language tests, an achievement known as “zero-shot learning.” It can also perform other writing-related tasks, like translating text from one language to another, summarizing long articles, and answering trivia questions.

GPT-2 does each of these jobs less competently than a specialized system, but its flexibility is a significant achievement. Nearly all machine learning systems used today are “narrow AI,” meaning they’re able to tackle only specific tasks. DeepMind’s original AlphaGo program, for example, was able to beat the world’s champion Go player, but it couldn’t best a child at Monopoly. The prowess of GPT-2, say OpenAI, suggests there could be methods available to researchers right now that can mimic more generalized brainpower.

“What the new OpenAI work has shown is that: yes, you absolutely can build something that really seems to ‘understand’ a lot about the world, just by having it read,” says Jeremy Howard, a researcher who was not involved with OpenAI’s work but has developed similar language modeling programs …

To put this work into context, it’s important to understand how challenging the task of language modeling really is. If I asked you to predict the next word in a given sentence — say, “My trip to the beach was cut short by bad __” — your answer would draw upon a range of knowledge. You’d consider the grammar of the sentence and its tone but also your general understanding of the world. What sorts of bad things are likely to ruin a day at the beach? Would it be bad fruit, bad dogs, or bad weather? (Probably the latter.)
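Vincent’s beach example can be caricatured in a few lines of code. This is my own toy sketch, not anything from the article or from OpenAI: it answers “what follows ‘bad’?” by simple counting over an invented corpus, which is the crudest possible form of language modeling,

```python
from collections import Counter

# A tiny invented corpus standing in for "general understanding of the world".
corpus = ("bad weather ruined the beach trip . bad weather again . "
          "the dog ate bad fruit . bad weather cut the picnic short").split()

# Count what follows the word "bad" -- the statistical version of the
# question in the article ("bad fruit, bad dogs, or bad weather?").
followers = Counter(after for before, after in zip(corpus, corpus[1:])
                    if before == "bad")
prediction = followers.most_common(1)[0][0]
print(followers)
print(prediction)  # "weather" follows "bad" most often in this corpus
```

Systems like GPT-2 are answering the same question, but with vastly richer representations of the preceding context than a single word and a frequency table.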

Despite this, programs that perform text prediction are quite common. You’ve probably encountered one today, in fact, whether that’s Google’s AutoComplete feature or the Predictive Text function in iOS. But these systems are drawing on relatively simple types of language modeling, while algorithms like GPT-2 encode the same information in more complex ways.

The difference between these two approaches is technically arcane, but it can be summed up in a single word: depth. Older methods record information about words in only their most obvious contexts, while newer methods dig deeper into their multiple meanings.

So while a system like Predictive Text only knows that the word “sunny” is used to describe the weather, newer algorithms know when “sunny” is referring to someone’s character or mood, when “Sunny” is a person, or when “Sunny” means the 1976 smash hit by Boney M.
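For illustration only, here’s a cartoon of that “depth” difference in Python. The sense vectors and cue words below are invented; real systems learn these representations from data rather than hard-coding them,

```python
# Toy sense vectors for "sunny" -- illustrative numbers only, not learned.
SENSES = {
    "weather": (1.0, 0.0),
    "mood":    (0.0, 1.0),
}

# A shallow, static model (the older approach) stores ONE vector per word:
# here, the midpoint of the two senses, blurring them together.
STATIC_SUNNY = (0.5, 0.5)

def contextual_sunny(sentence):
    """Cartoon of a deeper, contextual model: choose the sense whose cue
    words appear alongside "sunny"; otherwise fall back to the static blur."""
    cues = {
        "weather": {"sky", "forecast", "rain", "beach"},
        "mood":    {"cheerful", "personality", "smile"},
    }
    tokens = set(sentence.lower().split())
    for sense, cue_words in cues.items():
        if tokens & cue_words:
            return SENSES[sense]
    return STATIC_SUNNY

print(contextual_sunny("the forecast calls for sunny sky"))
print(contextual_sunny("she has a sunny cheerful personality"))
```

The point of the cartoon: a static model gives “sunny” the same representation everywhere, while a contextual one produces a different representation depending on the surrounding words.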

The success of these newer, deeper language models has caused a stir in the AI community. Researcher Sebastian Ruder compares their success to advances made in computer vision in the early 2010s. At this time, deep learning helped algorithms make huge strides in their ability to identify and categorize visual data, kickstarting the current AI boom. Without these advances, a whole range of technologies — from self-driving cars to facial recognition and AI-enhanced photography — would be impossible today. This latest leap in language understanding could have similar, transformational effects.

Hern’s February 14, 2019 article for the Guardian acts as a good overview, while Gallagher’s February 15, 2019 Ars Technica posting and Vincent’s February 14, 2019 article for The Verge take you progressively deeper into the world of AI text agents.

For anyone who wants to dig down even further, there’s a February 14, 2019 posting on OpenAI’s blog.

The US White House and its Office of Science and Technology Policy (OSTP)

It’s been a while since I first wrote this but I believe this situation has not changed.

There’s some consternation regarding the US Office of Science and Technology Policy’s (OSTP) diminishing size and lack of leadership. From a July 3, 2017 article by Bob Grant for The Scientist (Note: Links have been removed),

Three OSTP staffers did leave last week, but it was because their prearranged tenures at the office had expired, according to an administration official familiar with the situation. “I saw that there were some tweets and what-not saying that it’s zero,” the official tells The Scientist. “That is not true. We have plenty of PhDs that are still on staff that are working on science. All of the work that was being done by the three who left on Friday had been transitioned to other staffers.”

At least one of the tweets that the official is referring to came from Eleanor Celeste, who announced leaving OSTP, where she was the assistant director for biomedical and forensic sciences. “science division out. mic drop,” she tweeted on Friday afternoon.

The administration official concedes that the OSTP is currently in a state of “constant flux” and at a “weird transition period” at the moment, and expects change to continue. “I’m sure that the office will look even more different in three months than it does today, than it did six months ago,” the official says.

In two articles for Science Magazine, Jeffrey Mervis provides more detail. From his July 11, 2017 article,

OSTP now has 35 staffers, says an administration official who declined to be named because they weren’t authorized to speak to the media. Holdren [John Holdren], who in January [2017] returned to Harvard University, says the plunge in staff levels is normal during a presidential transition. “But what’s shocking is that, this far into the new administration, the numbers haven’t gone back up.”

The office’s only political appointee is Michael Kratsios, a former aide to Trump confidant and Silicon Valley billionaire Peter Thiel. Kratsios is serving as OSTP’s deputy chief technology officer and de facto OSTP head. Eight new detailees have arrived from other agencies since the inauguration.

Although there has been no formal reorganization of OSTP, a “smaller, more collaborative staff” is now grouped around three areas—science, technology, and national security—according to the Trump aide. Three holdovers from Obama’s OSTP are leading teams focused on specific themes—Lloyd Whitman in technology, Chris Fall in national security, and Deerin Babb-Brott in environment and energy. They report to Kratsios and Ted Wackler, a career civil servant who was Holdren’s deputy chief of staff and who joined OSTP under former President George W. Bush.

“It’s a very flat structure,” says the Trump official, consistent with the administration’s view that “government should be looking for ways to do more with less.” Ultimately, the official adds, “the goal is [for OSTP] to have probably closer to 50 [people].”

A briefing book prepared by Obama’s outgoing OSTP staff may be a small but telling indication of the office’s current status. The thick, three-ring binder, covering 100 issues, was modeled on one that Holdren received from John “Jack” Marburger, Bush’s OSTP director. “Jack did a fabulous job of laying out what OSTP does, including what reports it owes Congress, so we decided to do likewise,” Holdren says. “But nobody came [from Trump’s transition team] to collect it until a week before the inauguration.”

That person was Reed Cordish, the 43-year-old scion of billionaire real estate developer David Cordish. An English major in college, Reed Cordish was briefly a professional tennis player before joining the family business. He “spent an hour with us and took the book away,” Holdren says. “He told us, ‘This is an important operation and I’ll do my best to see that it flourishes.’ But we don’t know … whether he has the clout to make that happen.”

Cordish is now assistant to the president for intragovernmental and technology initiatives. He works in the new Office of American Innovation led by presidential son-in-law Jared Kushner. That office arranged a recent meeting with high-tech executives, and is also leading yet another White House attempt to “reinvent” government.

Trump has renewed the charter of the National Science and Technology Council, a multiagency group that carries out much of the day-to-day work of advancing the president’s science initiatives. … Still pending is the status of the President’s Council of Advisors on Science and Technology [emphasis mine], a body of eminent scientists and high-tech industry leaders that went out of business at the end of the Obama administration.

Mervis’ July 12, 2017 article is in the form of a Q&A (question and answer) session with the previously mentioned John Holdren, director of the OSTP in Barack Obama’s administration,

Q: Why did you have such a large staff?

A: One reason was to cover the bases. We knew from the start that the Obama administration thought cybersecurity would be an important issue and we needed to be capable in that space. We also knew we needed people who were capable in climate change, in science and technology for economic recovery and job creation and sustained economic growth, and people who knew about advanced manufacturing and nanotechnology and biotechnology.

We also recruited to carry out specific initiatives, like in precision medicine, or combating antibiotic resistance, or the BRAIN [Brain Research through Advancing Innovative Neurotechnologies] initiative. Most of the work will go on in the departments and agencies, but you need someone to oversee it.

The reason we ended up with 135 people at our peak, which was twice the number during its previous peak in the Clinton administration’s second term, was that this president was so interested in knowing what science could do to advance his agenda, on economic recovery, or energy and climate change, or national intelligence. He got it. He didn’t need to be tutored on why science and technology matters.

I feel I’ve been given undue credit for [Obama] being a science geek. It wasn’t me. He came that way. He was constantly asking what we could do to move the needle. When the first flu epidemic, H1N1, came along, the president immediately turned to me and said, “OK, I want [the President’s Council of Advisors on Science and Technology] to look in depth on this, and OSTP, and NIH [National Institutes of Health], and [the Centers for Disease Control and Prevention].” And he told us to coordinate my effort on this stuff—inform me on what can be done and assemble the relevant experts. It was the same with Ebola, with the Macondo oil spill in the Gulf, with Fukushima, where the United States stepped up to work with the Japanese.

It’s not that we had all the expertise. But our job was to reach out to those who did have the relevant expertise.

Q: OSTP now has 35 people. What does that level of staffing say to you?

A: I have to laugh.

Q: Why?

A: When I left, on 19 January [2017], we were down to 30 people. And a substantial fraction of the 30 were people who, in a sense, keep the lights on. They were the OSTP general counsel and deputy counsel, the security officer and deputy, the budget folks, the accounting folks, the executive director of NSTC [National Science and Technology Council].

There are some scientists left, and there are some scientists there still. But on 30 June the last scientist in the science division left.

Somebody said OSTP has shut down. But that’s not quite it. There was no formal decision to shut anything down. But they did not renew the contract of the last remaining science folks in the science division.

I saw somebody say, “Well, we still have some Ph.D.s left.” And that’s undoubtedly true. There are still some science Ph.D.s left in the national security and international affairs division. But because [OSTP] is headless, they have no direct connection to the president and his top advisers.

A: I don’t want to disparage the top people there. The top people there now are Michael Kratsios, who they named the deputy chief technology officer, and Ted Wackler, who was my deputy chief of staff and who was [former OSTP Director] Jack Marburger’s deputy, and who I kept because he’s a fabulously effective manager. And I believe that they are doing everything they can to make sure that OSTP, at the very least, does the things it has to do. … But right now I think OSTP is just hanging on.

Q: Why did some people choose to stay on?

A: A large portion of OSTP staff are borrowed from other agencies, and because the White House is the White House, we get the people we need. These are dedicated folks who want to get the job done. They want to see science and technology applied to advance the public interest. And they were willing to stay and do their best despite the considerable uncertainty about their future.

But again, most of the detailees, and the reason we went from 135 to 30 almost overnight, is that it’s pretty standard for the detailees to go back to their home agencies and wait for the next administration to decide what set of detailees it wants to advance their objectives.

So there’s nothing shocking that most of the detailees went back to their home agencies. The people who stayed are mostly employed directly by OSTP. What’s shocking is that, this far into the new administration, that number hasn’t gone back up. That is, they have only five more people than they had on January 20 [2017].

As I had been wondering about the OSTP and about the President’s Council of Advisors on Science and Technology (PCAST), it was good to get an update.

On a more parochial note, we in Canada are still waiting for an announcement about who our Chief Science Advisor might be.

China, US, and the race for artificial intelligence research domination

John Markoff and Matthew Rosenberg have written a fascinating analysis of the competition between US and China regarding technological advances, specifically in the field of artificial intelligence. While the focus of the Feb. 3, 2017 NY Times article is military, the authors make it easy to extrapolate and apply the concepts to other sectors,

Robert O. Work, the veteran defense official retained as deputy secretary by President Trump, calls them his “A.I. dudes.” The breezy moniker belies their serious task: The dudes have been a kitchen cabinet of sorts, and have advised Mr. Work as he has sought to reshape warfare by bringing artificial intelligence to the battlefield.

Last spring, he asked, “O.K., you guys are the smartest guys in A.I., right?”

No, the dudes told him, “the smartest guys are at Facebook and Google,” Mr. Work recalled in an interview.

Now, increasingly, they’re also in China. The United States no longer has a strategic monopoly on the technology, which is widely seen as the key factor in the next generation of warfare.

The Pentagon’s plan to bring A.I. to the military is taking shape as Chinese researchers assert themselves in the nascent technology field. And that shift is reflected in surprising commercial advances in artificial intelligence among Chinese companies. [emphasis mine]

Having read Marshall McLuhan (de rigueur for any Canadian pursuing a communications degree [sociology-based] anytime from the 1960s into at least the late 1980s), I took the movement of technology from military research to consumer applications as the standard direction. Television is a classic example, but there are many others, including modern plastic surgery. The first time I encountered the reverse (consumer-based technology being adopted by the military) was in a 2004 exhibition, “Massive Change: The Future of Global Design,” produced by Bruce Mau for the Vancouver (Canada) Art Gallery.

Markoff and Rosenberg develop their thesis further (Note: Links have been removed),

Last year, for example, Microsoft researchers proclaimed that the company had created software capable of matching human skills in understanding speech.

Although they boasted that they had outperformed their United States competitors, a well-known A.I. researcher who leads a Silicon Valley laboratory for the Chinese web services company Baidu gently taunted Microsoft, noting that Baidu had achieved similar accuracy with the Chinese language two years earlier.

That, in a nutshell, is the challenge the United States faces as it embarks on a new military strategy founded on the assumption of its continued superiority in technologies such as robotics and artificial intelligence.

First announced last year by Ashton B. Carter, President Barack Obama’s defense secretary, the “Third Offset” strategy provides a formula for maintaining a military advantage in the face of a renewed rivalry with China and Russia.

As consumer electronics manufacturing has moved to Asia, both Chinese companies and the nation’s government laboratories are making major investments in artificial intelligence.

The advance of the Chinese was underscored last month when Qi Lu, a veteran Microsoft artificial intelligence specialist, left the company to become chief operating officer at Baidu, where he will oversee the company’s ambitious plan to become a global leader in A.I.

The authors note some recent military moves (Note: Links have been removed),

In August [2016], the state-run China Daily reported that the country had embarked on the development of a cruise missile system with a “high level” of artificial intelligence. The new system appears to be a response to a missile the United States Navy is expected to deploy in 2018 to counter growing Chinese military influence in the Pacific.

Known as the Long Range Anti-Ship Missile, or L.R.A.S.M., it is described as a “semiautonomous” weapon. According to the Pentagon, this means that though targets are chosen by human soldiers, the missile uses artificial intelligence technology to avoid defenses and make final targeting decisions.

The new Chinese weapon typifies a strategy known as “remote warfare,” said John Arquilla, a military strategist at the Naval Postgraduate School in Monterey, Calif. The idea is to build large fleets of small ships that deploy missiles, to attack an enemy with larger ships, like aircraft carriers.

“They are making their machines more creative,” he said. “A little bit of automation gives the machines a tremendous boost.”

Whether or not the Chinese will quickly catch the United States in artificial intelligence and robotics technologies is a matter of intense discussion and disagreement in the United States.

Markoff and Rosenberg return to the world of consumer electronics as they finish their article on AI and the military (Note: Links have been removed),

Moreover, while there appear to be relatively cozy relationships between the Chinese government and commercial technology efforts, the same cannot be said about the United States. The Pentagon recently restarted its beachhead in Silicon Valley, known as the Defense Innovation Unit Experimental facility, or DIUx. It is an attempt to rethink bureaucratic United States government contracting practices in terms of the faster and more fluid style of Silicon Valley.

The government has not yet undone the damage to its relationship with the Valley brought about by Edward J. Snowden’s revelations about the National Security Agency’s surveillance practices. Many Silicon Valley firms remain hesitant to be seen as working too closely with the Pentagon out of fear of losing access to China’s market.

“There are smaller companies, the companies who sort of decided that they’re going to be in the defense business, like a Palantir,” said Peter W. Singer, an expert in the future of war at New America, a think tank in Washington, referring to the Palo Alto, Calif., start-up founded in part by the venture capitalist Peter Thiel. “But if you’re thinking about the big, iconic tech companies, they can’t become defense contractors and still expect to get access to the Chinese market.”

Those concerns are real for Silicon Valley.

If you have the time, I recommend reading the article in its entirety.

Impact of the US regime on thinking about AI?

A March 24, 2017 article by Daniel Gross for Slate.com hints that at least one high-level official in the Trump administration may be a little naïve in his understanding of AI and its impending impact on US society (Note: Links have been removed),

Treasury Secretary Steven Mnuchin is a sharp guy. He’s a (legacy) alumnus of Yale and Goldman Sachs, did well on Wall Street, and was a successful movie producer and bank investor. He’s good at, and willing to, put other people’s money at risk alongside some of his own. While he isn’t the least qualified person to hold the post of treasury secretary in 2017, he’s far from the best qualified. For in his 54 years on this planet, he hasn’t expressed or displayed much interest in economic policy, or in grappling with the big picture macroeconomic issues that are affecting our world. It’s not that he is intellectually incapable of grasping them; they just haven’t been in his orbit.

Which accounts for the inanity he uttered at an Axios breakfast Friday morning about the impact of artificial intelligence on jobs.

“it’s not even on our radar screen…. 50-100 more years” away, he said. “I’m not worried at all” about robots displacing humans in the near future, he said, adding: “In fact I’m optimistic.”

A.I. is already affecting the way people work, and the work they do. (In fact, I’ve long suspected that Mike Allen, Mnuchin’s Axios interlocutor, is powered by A.I.) I doubt Mnuchin has spent much time in factories, for example. But if he did, he’d see that machines and software are increasingly doing the work that people used to do. They’re not just moving goods through an assembly line, they’re soldering, coating, packaging, and checking for quality. Whether you’re visiting a GE turbine plant in South Carolina, or a cable-modem factory in Shanghai, the thing you’ll notice is just how few people there actually are. It’s why, in the U.S., manufacturing output rises every year while manufacturing employment is essentially stagnant. It’s why it is becoming conventional wisdom that automation is destroying more manufacturing jobs than trade. And now dark factories, which can run without lights because there are no people in them, are starting to become a reality. The integration of A.I. into factories is one of the reasons Trump’s promise to bring back manufacturing employment is absurd. You’d think his treasury secretary would know something about that.

It goes far beyond manufacturing, of course. Programmatic advertising buying, Spotify’s recommendation engines, chatbots on customer service websites, Uber’s dispatching system—all of these are examples of A.I. doing the work that people used to do. …

Adding to Mnuchin’s lack of credibility on the topic of jobs and robots/AI, Matthew Rozsa’s March 28, 2017 article for Salon.com features a study from the US National Bureau of Economic Research (Note: Links have been removed),

A new study by the National Bureau of Economic Research shows that every fully autonomous robot added to an American factory has reduced employment by an average of 6.2 workers, according to a report by BuzzFeed. The study also found that for every fully autonomous robot per thousand workers, the employment rate dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5 percentage points.
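As a back-of-envelope exercise, the quoted effect sizes can be applied to a hypothetical commuting zone. The workforce and robot counts below are invented for illustration; only the coefficients come from the study as reported,

```python
# Hypothetical inputs; only the coefficients are from the reported study.
workers = 100_000        # workforce of an imaginary commuting zone
robots_added = 500       # newly installed fully autonomous robots

# "every fully autonomous robot ... reduced employment by an average of 6.2 workers"
jobs_lost = robots_added * 6.2

# "for every fully autonomous robot per thousand workers, the employment rate
# dropped by 0.18 to 0.34 percentage points and wages fell by 0.25 to 0.5"
robots_per_thousand = robots_added / (workers / 1000)
employment_drop_pp = (robots_per_thousand * 0.18, robots_per_thousand * 0.34)
wage_drop_pct = (robots_per_thousand * 0.25, robots_per_thousand * 0.5)

print(f"jobs lost: {jobs_lost:.0f}")
print(f"employment rate drop: {employment_drop_pp[0]:.2f} to {employment_drop_pp[1]:.2f} percentage points")
print(f"wage drop: {wage_drop_pct[0]:.2f} to {wage_drop_pct[1]:.2f} percent")
```

Even with these made-up inputs, the arithmetic makes clear that the reported coefficients imply thousands of lost jobs per few hundred robots, which is hard to square with a “not even on our radar screen” outlook.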

I can’t help wondering: if the US Secretary of the Treasury is so oblivious to what is going on in the workplace, is that representative of other top-tier officials such as the Secretary of Defense, the Secretary of Labor, etc.? What is going to happen to US research in fields such as robotics and AI?

I have two more questions: in future, what happens to research that contradicts a top-tier Trump government official or makes one look foolish? Will it be suppressed?

You can find the report, “Robots and Jobs: Evidence from US Labor Markets” by Daron Acemoglu and Pascual Restrepo, in the NBER (US National Bureau of Economic Research) Working Paper Series (Working Paper 23285), released March 2017, here. The introduction featured some new information for me: the term ‘technological unemployment’ was introduced in 1930 by John Maynard Keynes.

Moving from a wholly US-centric view of AI

Naturally in a discussion about AI, it’s all about the US and the country considered its chief science rival, China, with a mention of its old rival, Russia. Europe did rate a mention, albeit as a totality. Having recently found out that Canadians were pioneers in a very important aspect of AI, machine learning, I feel obliged to mention it. You can find more about Canadian AI efforts in my March 24, 2017 posting (scroll down about 40% of the way), where you’ll find a very brief history and mention of the funding for the newly launched Pan-Canadian Artificial Intelligence Strategy.

If any of my readers have information about AI research efforts in other parts of the world, please feel free to write them up in the comments.

Commercializing nanotechnology: Peter Thiel’s Breakout Labs and Argonne National Laboratories

Breakout Labs

I last wrote about entrepreneur Peter Thiel’s Breakout Labs project in an Oct. 26, 2011 posting announcing its inception. An Oct. 6, 2015 Breakout Labs news release (received in my email) highlights a funding announcement for four startups of which at least three are nanotechnology-enabled,

Breakout Labs, a program of Peter Thiel’s philanthropic organization, the Thiel Foundation, announced today that four new companies advancing scientific discoveries in biomedical, chemical engineering, and nanotechnology have been selected for funding.

“We’re always hearing about bold new scientific research that promises to transform the world, but far too often the latest discoveries are left withering in a lab,” said Lindy Fishburne, Executive Director of Breakout Labs. “Our mission is to help a new type of scientist-entrepreneur navigate the startup ecosystem and build lasting companies that can make audacious scientific discoveries meaningful to everyday life. The four new companies joining the Breakout Labs portfolio – nanoGriptech, Maxterial, C2Sense, and CyteGen – embody that spirit and we’re excited to be working with them to help make their vision a reality.”

The future of adhesives: inspired by geckos

Inspired by the gecko’s ability to scuttle up walls and across ceilings thanks to its millions of micro/nano foot-hairs, nanoGriptech (http://nanogriptech.com/), based in Pittsburgh, Pa., is developing a new kind of microfiber adhesive material that is strong, lightweight, and reusable without requiring glues or producing harmful residues. Currently being tested by the U.S. military, NASA, and top global brands, nanoGriptech’s flagship product Setex™ is the first adhesive product of its kind that is not only strong and durable, but can also be manufactured at low cost, and at scale.

“We envision a future filled with no-leak biohazard enclosures, ergonomic and inexpensive car seats, extremely durable aerospace adhesives, comfortable prosthetic liners, high performance athletic wear, and widely available nanotechnology-enabled products manufactured less expensively — all thanks to the grippy little gecko,” said Roi Ben-Itzhak, CFO and VP of Business Development for nanoGriptech.

A sense of smell for the digital world

Despite the U.S. Department of Agriculture’s recent goals to drastically reduce food waste, most consumers don’t realize the global problem created by 1.3 billion metric tons of food wasted each year — clogging landfills and releasing unsustainable levels of methane gas into the atmosphere. Using technology developed at MIT’s Swager lab, Cambridge, Mass.-based C2Sense (http://www.c2sense.com/) is developing inexpensive, lightweight hand-held sensors based on carbon nanotubes which can detect fruit ripeness and meat, fish and poultry freshness. Smaller than half of a business card, these sensors can be developed at very low cost, require very little power to operate, and can be easily integrated into most agricultural supply chains, including food storage packaging, to ensure that food is picked, stored, shipped, and sold at optimal freshness.

“Our mission is to bring a sense of smell to the digital world. With our technology, that package of steaks in your refrigerator will tell you when it’s about to go bad, recommend some recipe options and help build out your shopping list,” said Jan Schnorr, Chief Technology Officer of C2Sense.

Amazing metals that completely repel water

Maxterial™, Inc. develops amazing materials that resist a variety of detrimental environmental effects through technology that emulates similar strategies found in nature, such as the self-cleaning lotus leaf and antifouling properties of crabs. By modifying the surface shape or texture of a metal, through a method that is very affordable and easy to introduce into the existing manufacturing process, Maxterial introduces a microlayer of air pockets that reduce contact surface area. The underlying material can be chemically the same as ever, retaining inherent properties like thermal and electrical conductivity. But through Maxterial’s technology, the metallic surface also becomes inherently water repellent. This property introduces the superhydrophobic maxterial as a potential solution to a myriad of problems, such as corrosion, biofouling, and ice formation. Maxterial is currently focused on developing durable hygienic and eco-friendly anti-corrosion coatings for metallic surfaces.

“Our process has the potential to create metallic objects that retain their amazing properties for the lifetime of the object – this isn’t an aftermarket coating that can wear or chip off,” said Mehdi Kargar, Co-founder and CEO of Maxterial, Inc. “We are working towards a day when shipping equipment can withstand harsh arctic environments, offshore structures can resist corrosion, and electronics can be fully submersible and continue working as good as new.”

New approaches to combat aging

CyteGen (http://cytegen.com/) wants to dramatically increase the human healthspan, tackle neurodegenerative diseases, and reverse age-related decline. What makes this possible now is new discovery tools backed by the dream team of interdisciplinary experts the company has assembled. CyteGen’s approach is unusually collaborative, tapping into the resources and expertise of world-renowned researchers across eight major universities to focus different strengths and perspectives to achieve the company’s goals. By approaching aging from a holistic, systematic point of view, rather than focusing solely on discrete definitions of disease, they have developed a new way to think about aging, and to develop treatments that can help people live longer, healthier lives.

“There is an assumption that aging necessarily brings the kind of physical and mental decline that results in Parkinson’s, Alzheimer’s, and other diseases. Evidence indicates otherwise, which is what spurred us to launch CyteGen,” said George Ugras, Co-Founder and President of CyteGen.

To date, Breakout Labs has invested in more than two dozen companies at the forefront of science, helping radical technologies get beyond common hurdles faced by early stage companies, and advance research and development to market much more quickly. Portfolio companies have raised more than six times the amount of capital invested in the program by the Thiel Foundation, and represent six Series A valuations ranging from $10 million to $60 million as well as one acquisition.

You can see the original Oct. 6, 2015 Breakout Labs news release here or in this Oct. 7, 2015 news item on Azonano.

Argonne National Labs and Nano Design Works (NDW) and the Argonne Collaborative Center for Energy Storage Science (ACCESS)

The US Department of Energy’s Argonne National Laboratory’s Oct. 6, 2015 press release by Greg Cunningham announced two initiatives meant to speed commercialization of nanotechnology-enabled products for the energy storage and other sectors,

Few technologies hold more potential to positively transform our society than energy storage and nanotechnology. Advances in energy storage research will revolutionize the way the world generates and stores energy, democratizing the delivery of electricity. Grid-level storage can help reduce carbon emissions through the increased adoption of renewable energy and use of electric vehicles while helping bring electricity to developing parts of the world. Nanotechnology has already transformed the electronics industry and is bringing a new set of powerful tools and materials to developers who are changing everything from the way energy is generated, stored and transported to how medicines are delivered and the way chemicals are produced through novel catalytic nanomaterials.

Recognizing the power of these technologies and seeking to accelerate their impact, the U.S. Department of Energy’s Argonne National Laboratory has created two new collaborative centers that provide an innovative pathway for business and industry to access Argonne’s unparalleled scientific resources to address the nation’s energy and national security needs. These centers will help speed discoveries to market to ensure U.S. industry maintains a lead in this global technology race.

“This is an exciting time for us, because we believe this new approach to interacting with business can be a real game changer in two areas of research that are of great importance to Argonne and the world,” said Argonne Director Peter B. Littlewood. “We recognize that delivering to market our breakthrough science in energy storage and nanotechnology can help ensure our work brings the maximum benefit to society.”

Nano Design Works (NDW) and the Argonne Collaborative Center for Energy Storage Science (ACCESS) will provide central points of contact for companies — ranging from large industrial entities to smaller businesses and startups, as well as government agencies — to benefit from Argonne’s world-class expertise, scientific tools and facilities.

NDW and ACCESS represent a new way to collaborate at Argonne, providing a single point of contact for businesses to assemble tailored interdisciplinary teams to address their most challenging R&D questions. The centers will also provide a pathway to Argonne’s fundamental research that is poised for development into practical products. The chance to build on existing scientific discovery is a unique opportunity for businesses in the nano and energy storage fields.

The center directors, Andreas Roelofs of NDW and Jeff Chamberlain of ACCESS, have both created startups in their careers and understand the value that collaboration with a national laboratory can bring to a company trying to innovate in technologically challenging fields of science. While the new centers will work with all sizes of companies, a strong emphasis will be placed on helping small businesses and startups, which are drivers of job creation and receive a large portion of the risk capital in this country.

“For a startup like mine to have the ability to tap the resources of a place like Argonne would have been immensely helpful,” said Roelofs. “We’ve seen the power of that sort of access, and we want to make it available to the companies that need it to drive truly transformative technologies to market.”

Chamberlain said his experience as an energy storage researcher and entrepreneur led him to look for innovative approaches to leveraging the best aspects of private industry and public science. The national laboratory system has a long history of breakthrough science that has worked its way to market, but shortening that journey from basic research to product has become a growing point of emphasis for the national laboratories over the past couple of decades. The idea behind ACCESS and NDW is to make that collaboration even easier and more powerful.

“Where ACCESS and NDW will differ from the conventional approach is through creating an efficient way for a business to build a customized, multi-disciplinary team that can address anything from small technical questions to broad challenges that require massive resources,” Chamberlain said. “That might mean assembling a team with chemists, physicists, computer scientists, materials engineers, imaging experts, or mechanical and electrical engineers; the list goes on and on. It’s that ability to tap the full spectrum of cross-cutting expertise at Argonne that will really make the difference.”

Chamberlain is deeply familiar with the potential of energy storage as a transformational technology, having led the formation of Argonne’s Joint Center for Energy Storage Research (JCESR). The center’s years-long quest to discover technologies beyond lithium-ion batteries has solidified the laboratory’s reputation as one of the key global players in battery research. ACCESS will tap Argonne’s full battery expertise, which extends well beyond JCESR and is dedicated to fulfilling the promise of energy storage.

Energy storage research has profound implications for energy security and national security. Chamberlain points out that approximately 1.3 billion people across the globe do not have access to electricity, with another billion having only sporadic access. Energy storage, coupled with renewable generation like solar, could solve that problem and eliminate the need to build out massive power grids. Batteries also have the potential to create a more secure, stable grid for countries with existing power systems and help fight global climate disruption through adoption of renewable energy and electric vehicles.

Argonne researchers are pursuing hundreds of projects in nanoscience, but some of the more notable include research into targeted drugs that affect only cancerous cells; magnetic nanofibers that can be used to create more powerful and efficient electric motors and generators; and highly efficient water filtration systems that can dramatically reduce the energy requirements for desalination or cleanup of oil spills. Other researchers are working with nanoparticles that create a super-lubricated state and other very-low friction coatings.

“When you think that 30 percent of a car engine’s power is sacrificed to frictional loss, you start to get an idea of the potential of these technologies,” Roelofs said. “But it’s not just about the ideas already at Argonne that can be brought to market, it’s also about the challenges for businesses that need Argonne-level resources. I’m convinced there are many startups out there working on transformational ideas that can greatly benefit from the help of a place like Argonne to bring those ideas to fruition. That is what has me excited about ACCESS and NDW.”

For more information on ACCESS, see: access.anl.gov

For more information on NDW, see: nanoworks.anl.gov

You can read more about the announcement in an Oct. 6, 2015 article by Greg Watry for R&D magazine featuring an interview with Andreas Roelofs.

A tooth and art installation in Vancouver (Canada) and bodyhacking and DIY (do-it-yourself) culture in the US

After a chat with artist David Khang about various mergings of flesh and nonliving entities, I saw his installation, Amelogenesis Imperfecta (How Deep Is the Skin of Teeth), at Vancouver’s grunt gallery with an enhanced appreciation for the shadowy demarcation between living entities (human and nonhuman) and between living and nonliving entities (this was à propos the work being done at the SymbioticA Centre in Australia, which is mentioned in the following excerpt), and for some of the social and ethical questions that arise. Robin Laurence, in her Sept. 13, 2012 article for the Georgia Straight newspaper/website, describes both the installation and its influences,

With Khang’s newly launched works, Amelogenesis Imperfecta (How Deep Is the Skin of Teeth), on view at the grunt gallery until September 22, and Beautox Me, at CSA Space [#5–2414 Main Street] through October 7, he has again found formally and intellectually complex ways to meld his seemingly disparate professions. The grunt gallery installation includes microscopic laser drawings on epithelial cells and an animated short of a human tooth evolving into a fearsome, all-devouring shark. This work developed out of experiments Khang conducted during his 2010 residency at SymbioticA Centre for Biological Arts in Perth, Australia. “It began as a goal-oriented project to manufacture enamel,” he says, “but ended up being a meditation on ethical interspecies relations.” Fetal calf serum, he explains, is used “to fuel” all stem-cell research.

In our far-ranging discussion, Khang (whose show at the Grunt [350 E. 2nd Avenue, Vancouver] ends on Saturday, Sept. 22, 2012) and I discussed not only interspecies relations but also the integration of flesh with machine/technology, which is being explored and discussed at SymbioticA and elsewhere.

Coincidentally, one day after my chat with Khang I found this Sept. 19, 2012 article (Biohackers And DIY Cyborgs Clone Silicon Valley Innovation) by Neal Ungerleider for Fast Company (Note: I have removed links),

The grinders (DIY cybernetics enthusiasts) and their comrades in arms–biohackers working on improving human source code, quantified self enthusiasts who arm themselves with constant bodily data feeds, and independent DIY biotechnology enthusiasts–are moonlighting for now in basements, shared spaces, and makeshift labs. But they’re ultimately aiming to change the world. Think of how bionic [sic] legs like those belonging to Oscar Pistorius and cochlear implants that let the deaf hear have changed everyday life for so many people. Then multiply that by a million. A million people. And millions of dollars.

Not only has the new wave of do-it-yourself (DIY) cybernetics moved well beyond science fiction, it’s going to cause a business boom in the not-too-distant future.

I have two comments. (1) Pistorius does not have bionic legs, but he does use some very high-tech racing prosthetics, which I describe briefly in my July 27, 2009 posting in part 4 of a series on human enhancement. On the basis of this error, you may want to apply a little caution when reading the rest of Ungerleider’s article. (2) Prior to this article, I hadn’t considered machine/flesh integration as a business opportunity, but clearly I’ve been shortsighted.

I was particularly interested in this following passage where Ungerleider mentions the fusion of the living and of the electronic.

In Brooklyn, a small “community biolab” called Genspace is home to approximately a dozen DIY biology experimenters whose work often involves the fusion of the living and the electronic. Classes are offered to the public in synthetic biology, which engineers living organisms as if they were biological machines.

A workshop recently held at Genspace, Crude Control, showed how in-vitro meat and leather could be created via tissue engineering, and it explored the possibility of creating semi-living “products” from them. Although the Genspace workshop was for educational purposes, similar technologies are already being monetized elsewhere–Peter Thiel recently sank six figures into a startup that will make 3-D printed in vitro meat commercially available.

The teacher at the Crude Control workshop, Oron Catts, [emphasis mine] walked participants through “basic tissue culture and tissue engineering protocols, including developing some DIY tools and isolating cells from a bone we got from a local butcher.” Some of Catts’ previous projects include bioengineering a steak from pre-natal sheep cells (in his words, “steak grown from an animal that was not yet born“) and victimless leather grown from cell lines. [emphases mine]
 

I emphasized Oron Catts because he is SymbioticA Centre’s director. From his biographical page on the SymbioticA Centre website,

Oron Catts is an artist, researcher and curator whose work with the Tissue Culture and Art Project (which he founded in 1996 with Ionat Zurr) is part of the NY MoMA design collection and has been exhibited and presented internationally. In 2000 he co-founded SymbioticA, an artistic research laboratory housed within the School of Anatomy and Human Biology, The University of Western Australia. Under Oron’s leadership, SymbioticA has gone on to win the Prix Ars Electronica Golden Nica in Hybrid Art (2007) and became a Centre for Excellence in 2008.

Oron has been a researcher at The University of Western Australia since 1996 and was a Research Fellow at the Tissue Engineering and Organ Fabrication Laboratory, Harvard Medical School, Massachusetts General Hospital, Boston from 2000-2001. He has worked with numerous other bio-medical laboratories around the world. In 2007 he was a visiting Scholar at the Department of Art and Art History, Stanford University. He is currently undertaking a “Synthetic Aesthetics” residency, jointly funded by the National Science Foundation (USA) and the Engineering and Physical Sciences Research Council (UK), to explore the implications of synthetic biology; and is a Visiting Professor of Design Interactions, Royal College of Art, London.

You can find out more about the SymbioticA Centre here.

As for the “steak grown from an animal that was not yet born” and “victimless leather,” the terminology hints at, while the description of the work demonstrates, how close we are to a new reality in our relationships with nonhumans. Some readers may find the rest of Ungerleider’s article even more eyebrow-raising/disturbing/exciting.

Transforming tomorrow (?)—Peter Thiel and his 20 under 20 visionaries

In my Dec. 19, 2011 posting, I wrote about a rapidly approaching deadline for Peter Thiel’s fellowship programme for people under the age of 20. Unusually, I now have some followup information, thanks to an Aug. 13, 2012 posting on the Foresight Institute blog. The finalists, 40 from over 1000 applicants, are appearing in a CNBC documentary, 20 Under 20, Transforming Tomorrow.

The documentary is airing in two parts on Aug. 13, 2012 for part one and Aug. 14, 2012 for a rerun of part one prior to airing part two which features (from the Foresight Institute posting),

Foresight CoFounder Christine Peterson and Director Desiree Dudley will appear in their role as mentors for the Thiel Foundation’s 20Under20 in CNBC’s documentary “20Under20: Transforming Tomorrow”. See these brilliant young people in CNBC’s upcoming documentary, 9-11pm EDT this Tuesday, August 14th! (It’s a 2-part documentary; the 1st episode actually first airs at 10pm EDT on Monday, but re-airs at 9pm EDT Tuesday before the second part at 10pm EDT.) Video trailer: http://youtu.be/F_YR7sfXjl0

If you prefer, you can watch the video here,

This is a very slick production and, maybe it’s the Canadian in me, but the arrogance is breathtaking. It’s to be expected in the under-20 group and necessary if they are to deal with complex issues. The more experienced entrepreneurs are the ones I find shocking and somewhat out of touch. For example, it’s widely recognized that there are a number of possible causes for cancer; as such, there is no single ‘cure for cancer’ or ‘magic bullet’, despite what one of the entrepreneurs suggests in this trailer.

Visionary (under the age of 20) needed by Dec. 31, 2011

The folks over at the Foresight Institute made note of a competition for visionaries under the age of 20 in a Dec. 16, 2011 posting,

The future will not take care of itself. Global prosperity is not inevitable. The world will only get better if visionary people are creative and relentless about solving hard problems.

The 2011 class of Thiel Fellows includes 24 people who are tackling breakthroughs in hardware and robotics, making energy plentiful, making markets more effective, challenging the notion that there is only one way to get an education, and extending the human lifespan. Several of them have already launched companies, secured financing, and won prestigious awards. As they’re demonstrating, you don’t need college to invent the future (you can read about their progress in a recent article in TechCrunch).

If you’re under twenty and love science or technology, we hope you’ll consider joining the 2012 class of fellows. Go to ThielFellowship.org and apply to change the world. There’s no cost to apply, and they’re accepting applications through December 31. Fellows will be appointed this spring and begin two-year fellowships this summer.

The Thiel Foundation does invite foreign candidates to apply (from the FAQ page),

May foreign candidates apply?

Yes. We encourage candidates from around the world to apply. Foreign candidates are responsible for obtaining their own visa into the United States. If you receive a fellowship you must be up for navigating this challenge.

Here’s a bit more about the current crop of fellows. From Rip Emerson’s Dec. 8, 2011 article for TechCrunch,

In May, Thiel [Peter Thiel], along with his Foundation, put their money where their mouth is, announcing the “20 Under 20 Thiel Fellowship”, a program that offers talented independent thinkers under the age of 20 $100,000 and two years free of school to pursue their entrepreneurial endeavors. The program launched with 24 Thiel Fellows, each of these wiz kids pursuing their own inspiring scientific and technical projects. …

I did a writeup about another of Peter Thiel’s (the founder of PayPal) projects, an entrepreneurial fund for scientists, in my April 26, 2011 posting.

Thiel certainly seems to be interested in stimulating new prospects for the future.

Entrepreneurial scientists: there’s a new fund for you

The San Francisco-based Thiel Foundation announced today that it will be offering funds to entrepreneurial-minded scientists for early stage science and technology research via Breakout Labs. From the Oct. 25, 2011 article by Anya Kamenetz for Fast Company,

Last seen paying kids to drop out of college and starting his own private island nation, PayPal founder Peter Thiel has announced a new philanthropic venture that sounds a little more reasonable. Breakout Labs, Thiel said at a speech at Stanford, would grant $50,000 to $350,000 in funding to “entrepreneurial” scientists–those completely independent of typical research institutions–for very early projects that may even be pre-proof of concept. Some of the money must be paid forward through revenue-sharing agreements with Breakout Labs, and the scientists must pursue patents or publish their findings in open-access journals like PLoS [Public Library of Science], Creative Commons-style.

There’s more information in the Oct. 25, 2011 media release on the Thiel Foundation website,

Calling for more rapid innovation in science and technology, Peter Thiel today launched a new program of the Thiel Foundation, Breakout Labs. Speaking at Stanford to an event organized by the Business Association of Stanford Entrepreneurial Students, Thiel announced that Breakout Labs will use a revolving fund to improve the way early-stage science and technology research is funded by helping independent scientists and early-stage companies develop their most radical ideas.

“Some of the world’s most important technologies were created by independent minds working long nights in garage labs,” said Thiel. “But when their ideas are too new, unproven, or unpopular, these visionaries can find it difficult to obtain support. Through Breakout Labs, we’re going to create opportunities for revolutionary science by cultivating an entrepreneurial research model that prizes extreme creativity and bold thinking.”

With venture capital shifting to later and later stages of development and commercialization, and with ever shorter investment time horizons, there are few available means of support for independent early-stage development of science and technology. But many of these technologies are ripe for the same kind of innovations that began in computing during the 1970s, when small, visionary start-ups began to take on industry giants who wielded much bigger research and development budgets. Breakout Labs will accelerate this trend.

“Venture capital firms look for research that can be brought to market within five to seven years, and major funders like the National Institutes of Health have a low tolerance for radical ideas,” said Breakout Labs founder and executive director Lindy Fishburne. “At Breakout Labs, we’re looking for ideas that are too ahead of their time for traditional funding sources, but represent the first step toward something that, if successful, would be groundbreaking.”

Then there’s the Programs page of the Breakout Lab’s website,

Breakout Labs is a bold re-envisioning of the way early-stage science gets funded, allowing independent researchers and early-stage companies to test their most radical ideas. We invite individuals, teams of individuals, and early stage companies from around the world to apply for funding of a specific project that would push the limits of science and technology.

It’s unusual to see a funding program that isn’t constrained by nationality or country of residence. Another unusual feature is that revenue sharing is being built into the relationship,

Breakout Labs offers two types of revenue sharing agreements:

  • Funded companies retain IP that arises from the project and commit a modest royalty stream and an option for a small investment in their company to Breakout Labs.
  • Funded researchers assign project IP to Breakout Labs in exchange for a substantial royalty stream from any future revenue generated by successful commercialization of the IP.

Key to support from Breakout Labs is an agreement that maximizes the dissemination of the resulting innovations, either through publication or intellectual property development.

Good luck to all the entrepreneurial scientists out there!