Tag Archives: MIT

Graphite ‘gold’ rush?

Someone in Germany (I think) is very excited about graphite; more specifically, there’s excitement around graphite flakes located in the province of Québec, Canada. The person who wrote this news release, though, might have wanted to run a search for ‘graphite’ and ‘gold rush’ first; the last graphite gold rush seems to have taken place in 2013.

Here’s the March 1, 2018 news release on PR Newswire (Cision) (Note: Some links have been removed),

PALM BEACH, Florida, March 1, 2018 /PRNewswire/ —

MarketNewsUpdates.com News Commentary

Much like the gold rush in North America in the 1800s, people are going out in droves searching for a different kind of precious material, graphite. The thing your third-grade pencils were made of is now one of the hottest commodities on the market. This graphite is not being mined by your run-of-the-mill, old-timey, soot-covered prospectors anymore. Big mining companies are all looking for this important resource, integral to the production of lithium-ion batteries, due to the rise in popularity of electric cars. These players include Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), Teck Resources Limited (NYSE: TECK), Nemaska Lithium (TSX: NMX), Lithium Americas Corp. (TSX: LAC), and Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF).

These companies, looking to manufacture their graphite-based products, have seen steady positive growth over the past year. Their development of cutting-edge new products seems to be paying off. But in order to continue innovating, these companies need the graphite to do it. One junior miner looking to capitalize on the growing demand for this commodity is Graphite Energy Corp.

Graphite Energy is a mining company focused on developing graphite resources. Graphite Energy’s state-of-the-art mining technology is friendly to the environment and has indicated graphite carbon (Cg) in the range of 2.20% to 22.30%, with an average of 10.50% Cg, from their Lac Aux Bouleaux Graphite Property in Southern Quebec [Canada].

Not Just Any Graphite Will Do

Graphite is one of the most in-demand technology materials required for a green and sustainable world. Demand is only set to increase as the need for lithium-ion batteries grows, fueled by the popularity of electric vehicles. However, not all graphite is created equal. The price of natural graphite has more than doubled since 2013 as companies look to maintain environmental standards that the use of synthetic graphite cannot satisfy, owing to its polluting manufacturing process. Synthetic graphite, derived from petroleum, is also very expensive to produce, costing up to ten times as much as natural graphite. Therefore, manufacturers are interested in increasing the proportion of natural graphite in their products in order to lower their costs.

High-grade large-flake graphite is the solution to the environmental issues these companies are facing. But there is only so much supply to go around. Recent news from Graphite Energy Corp. on February 26th [2018] showed promising exploratory results. The announcement of the commencement of drilling is a positive step toward meeting this increased demand.

Everything from batteries to solar panels needs to be made with this natural high-grade flake graphite, because there is little point in powering your home with the sun or charging your car if producing the products themselves does more harm than good to the environment. However, supply consistency remains an issue, since raw material impurities vary from mine to mine. Certain types of battery technology already require graphite to be almost 100% pure. It is very possible that the purity requirements will increase in the future.

Natural graphite is also the basis of graphene, the uses of which seem limited only by scientists’ imaginations, given the host of new applications announced daily. In a recent study reported by ResearchSEA, a team from the Ocean University of China and Yunnan Normal University developed a highly efficient dye-sensitized solar cell using a graphene layer. This thin layer of graphene will allow solar panels to generate electricity when it rains.

Graphite Energy Is Keeping It Green

Whether it’s the graphite for the solar panels that will power the homes of tomorrow, or the lithium-ion batteries that will fuel the latest cars, these advancements need to be made in an environmentally conscious way. Mining companies like Graphite Energy Corp. specialize in the production of environmentally friendly graphite. The company will be producing its supply of natural graphite with the lowest environmental footprint possible.

From Saltwater To Clean Water Using Graphite

The world’s freshwater supply is at risk of running out. In order to mitigate this global disaster, worldwide spending on desalination technology was an estimated $16.6 billion in 2016. Due to the recent intense droughts in California, the state has accelerated the construction of desalination plants. However, the operating costs, and the environmental impact of the process’s energy requirements, have hindered any real progress in the space, until now.

Jeffrey Grossman, a professor at MIT’s [Massachusetts Institute of Technology, United States] Department of Materials Science and Engineering (DMSE), has been looking into whether graphite/graphene might reduce the cost of desalination.

“A billion people around the world lack regular access to clean water, and that’s expected to more than double in the next 25 years,” Grossman says. “Desalinated water costs five to 10 times more than regular municipal water, yet we’re not investing nearly enough money into research. If we don’t have clean energy we’re in serious trouble, but if we don’t have water we die.”

Grossman’s lab has demonstrated strong results showing that new filters made from graphene could greatly improve the energy efficiency of desalination plants while potentially reducing other costs as well.

Graphite/Graphene producers like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are moving quickly to provide the materials necessary to develop this new generation of desalination plants.

Potential Comparables

Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF)

Cruz Cobalt Corp. is a cobalt mining company involved in the identification, acquisition and exploration of mineral properties. The company’s geographical segments include the United States and Canada. They are focused on acquiring and developing high-grade cobalt projects in politically stable, environmentally responsible and ethical mining jurisdictions, essential for the rapidly growing rechargeable battery and renewable energy sectors.

Nemaska Lithium (TSE: NMX.TO)

Nemaska Lithium is a lithium mining company. The company is a supplier of lithium hydroxide and lithium carbonate to the emerging lithium battery market that is largely driven by electric vehicles. Nemaska’s mining operations are located in the mining-friendly jurisdiction of Quebec, Canada. Nemaska Lithium has received a notice of allowance of a main patent application on its proprietary process to produce lithium hydroxide and lithium carbonate.

Lithium Americas Corp. (TSX: LAC.TO)

Lithium Americas is developing one of North America’s largest lithium deposits in northern Nevada. It operates two lithium projects, namely the Cauchari-Olaroz project, located in Argentina, and the Lithium Nevada project, located in Nevada. The company also manufactures specialty organoclay products, derived from clays, for sale to the oil and gas and other sectors.

Teck Resources Limited (NYSE: TECK)

Teck Resources Limited is a Canadian metals and mining company. Teck’s principal products include coal, copper and zinc, with secondary products including lead, silver, gold, molybdenum, germanium, indium and cadmium. Teck focuses on providing products that are essential to building a better quality of life for people around the globe.

Graphite Mining Today For A Better Tomorrow

Graphite mining will forever be intertwined with the latest advancements in science and technology. Graphite deserves attention for its various use cases in the automotive, energy, aerospace and robotics industries. In order for these and other industries to become sustainable and environmentally friendly, they will have to rely on graphite. Therefore, this rapidly growing sector has the potential to fuel investor interest in the mining space throughout 2018. The near-limitless uses of graphite have the potential to impact every facet of our lives. Companies like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are at the forefront of this technological revolution.

For more information on Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), please visit streetsignals.com for a free research report.

Streetsignals.com (SS) is the source of the Article and content set forth above. References to any issuer other than the profiled issuer are intended solely to identify industry participants and do not constitute an endorsement of any issuer and do not constitute a comparison to the profiled issuer. FN Media Group (FNM) is a third-party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. FNM is NOT affiliated with SS or any company mentioned herein. The commentary, views and opinions expressed in this release by SS are solely those of SS and are not shared by and do not reflect in any manner the views or opinions of FNM. Readers of this Article and content agree that they cannot and will not seek to hold liable SS and FNM for any investment decisions by their readers or subscribers. SS and FNM and their respective affiliated companies are a news dissemination and financial marketing solutions provider and are NOT registered broker-dealers/analysts/investment advisers, hold no investment licenses and may NOT sell, offer to sell or offer to buy any security.

The Article and content related to the profiled company represent the personal and subjective views of the Author (SS), and are subject to change at any time without notice. The information provided in the Article and the content has been obtained from sources which the Author believes to be reliable. However, the Author (SS) has not independently verified or otherwise investigated all such information. None of the Author, SS, FNM, or any of their respective affiliates, guarantee the accuracy or completeness of any such information. This Article and content are not, and should not be regarded as investment advice or as a recommendation regarding any particular security or course of action; readers are strongly urged to speak with their own investment advisor and review all of the profiled issuer’s filings made with the Securities and Exchange Commission before making any investment decisions and should understand the risks associated with an investment in the profiled issuer’s securities, including, but not limited to, the complete loss of your investment. FNM was not compensated by any public company mentioned herein to disseminate this press release but was compensated seventy six hundred dollars by SS, a non-affiliated third party to distribute this release on behalf of Graphite Energy Corp.

FNM HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE.

This release contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E of the Securities Exchange Act of 1934, as amended, and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. “Forward-looking statements” describe future expectations, plans, results, or strategies and are generally preceded by words such as “may”, “future”, “plan” or “planned”, “will” or “should”, “expected,” “anticipates”, “draft”, “eventually” or “projected”. You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company’s annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and SS and FNM undertake no obligation to update such statements.

Media Contact:

FN Media Group, LLC
info@marketnewsupdates.com
+1(561)325-8757

SOURCE MarketNewsUpdates.com

Hopefully my insertions of ‘Canada’ and the ‘United States’ help to clarify matters. North America and the United States are not synonyms although they are sometimes used synonymously.

There is another copy of this news release on Wall Street Online (Deutschland), both in English and in German. By the way, that was my first clue that there might be some German interest. The second clue was the Graphite Energy Corp. homepage. Unusually for a company with ‘headquarters’ in the Canadian province of British Columbia, there’s an option to read the text in German.

Graphite Energy Corp. seems to be a relatively new player in the ‘rush’ to mine graphite flakes for use in graphene-based applications. One of my first posts about mining for graphite flakes was a July 26, 2011 posting concerning Northern Graphite and their mining operation (Bissett Creek) in Ontario. I don’t write about them often but they are still active if their news releases are to be believed. The latest was issued February 28, 2018 and offers “financial metrics for the Preliminary Economic Assessment (the “PEA”) on the Company’s 100% owned Bissett Creek graphite project.”

The other graphite mining company mentioned here is Lomiko Metals. The latest posting here about Lomiko is a December 23, 2015 piece regarding an analysis and stock price recommendation by a company known as SeeThruEquity. Like Graphite Energy Corp., Lomiko has its mines in Québec and its business headquarters in British Columbia. Lomiko issued a March 16, 2018 news release announcing its reinstatement for trading on the TSX Venture Exchange,

(Vancouver, B.C.) Lomiko Metals Inc. (“Lomiko”) (TSX-V: LMR, OTC: LMRMF, FSE: DH8C) announces it has been successful in its reinstatement application with the TSX Venture Exchange and trading will begin at the opening on Tuesday, March 20, 2018.

Getting back to the flakes, here’s more about Graphite Energy Corp.’s mine (from the About Lac Aux Bouleaux webpage),

Lac Aux Bouleaux

The Lac Aux Bouleaux Property comprises 14 mineral claims in one contiguous block totaling 738.12 hectares of land on NTS 31J05, near the town of Mont-Laurier in southern Québec. Lac Aux Bouleaux (“LAB”) is a world-class graphite property that borders the only producing graphite mine in North America [Note: There are three countries in North America, Canada, the United States, and Mexico. Québec is in Canada.]. On the property we have a full production facility already built, which includes an open pit mine, processing facility, tailings pond, power and easy access to roads.

High Purity Levels

An important asset of LAB is its metallurgy. The property contains a high proportion of large and jumbo flakes, from which a high-purity concentrate has been proven to be produced across all flake sizes by a simple flotation process. The concentrate can then be further purified, using the province’s green and affordable hydroelectricity, for use in lithium-ion batteries.

The geological work performed to verify the existing data consisted of visiting accessible graphite outcrops and reviewing historical exploration and development work on the property. Large-flake graphite showings located on the property were confirmed, with flake sizes in the range of 0.5 to 2 millimeters, typically present in shear zones at the contact of gneisses and marbles, where the graphite content usually ranges from 2% to 20%. The results from the property are outstanding, showing jumbo-flake natural graphite.

An onsite mill structure, a tailings dam facility, and a historical open mining pit are already present on the property. Based on this existing infrastructure, the property is ready to be put into production. The company hopes to be able to ship its mined graphite by rail directly to Tesla’s Gigafactory being built in Nevada [United States], which will produce 35 GWh of batteries annually by 2020.

Adjacent Properties

The property is located in a very active graphite exploration and production area, immediately south of, and adjacent to, TIMCAL’s Lac des Iles graphite mine in Quebec, a world-class deposit producing 25,000 tonnes of graphite annually. There are several graphite showings and past-producing mines in its vicinity, including a historic deposit located on the property.

The open pit mine, in operation since 1989 with an onsite plant, ranks fifth in world graphite production. The mine is operated by TIMCAL Graphite & Carbon, a subsidiary of Imerys S.A., a French multinational company. The mine has an average grade of 7.5% Cg (graphite carbon) and has been producing 50 different graphite products for various graphite end users around the globe.

Canadians! We have great flakes!

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions, and classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoos.
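A quick aside from me: the “14-fold” and “six times” figures quoted above are simple growth multiples, that is, each metric divided by its value in a base year. Here’s a minimal sketch of that arithmetic in Python; the yearly counts are invented for illustration and are not the AI Index’s actual data,

```python
# Growth multiple of a metric relative to a base year, the way the AI Index
# reports it (e.g., "active AI startups have increased 14-fold since 2000").
# The yearly counts below are hypothetical, for illustration only.

def growth_multiple(series, base_year, year):
    """Return the metric's value in `year` divided by its value in `base_year`."""
    return series[year] / series[base_year]

active_ai_startups = {2000: 50, 2017: 700}   # hypothetical counts
vc_investment_musd = {2000: 150, 2017: 900}  # hypothetical US$ millions

print(growth_multiple(active_ai_startups, 2000, 2017))  # 14.0, i.e., "14-fold"
print(growth_multiple(vc_investment_musd, 2000, 2017))  # 6.0, i.e., "six times"
```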

You can find the AI Index here. They’re featuring their 2017 report, but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question; it suggests that even very, very bright people can become blind, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

The devil’s (i.e., luciferase) in the bioluminescent plant

The American Chemical Society (ACS) and the Massachusetts Institute of Technology (MIT) have both issued news releases about the latest in bioluminescence. The researchers tested their work on watercress, a vegetable that was viewed in almost sacred terms in my family; it was not easily available in Vancouver (Canada) when I was a child.

My father would hunt down fresh watercress by checking out the Chinese grocery stores. He could spot the fresh stuff from across the street while driving at 30 miles or more per hour. Spotting it entailed an immediate hunt for parking (my father hated to pay, so we might have to go around the block a few times or more) and a dash out of the car to ensure that he got his watercress before anyone else spotted it. These days it’s much more easily available and, thankfully, my father has passed on, so he won’t have to think about glowing watercress.

Getting back to bioluminescent vegetable research, the American Chemical Society’s Dec. 13, 2017 news release on EurekAlert (and as a Dec. 13, 2017 news item on ScienceDaily) makes the announcement,

The 2009 film “Avatar” created a lush imaginary world, illuminated by magical, glowing plants. Now researchers are starting to bring this spellbinding vision to life to help reduce our dependence on artificial lighting. They report in ACS’ journal Nano Letters a way to infuse plants with the luminescence of fireflies.

Nature has produced many bioluminescent organisms; however, plants are not among them. Most attempts so far to create glowing greenery — decorative tobacco plants in particular — have relied on introducing the genes of luminescent bacteria or fireflies through genetic engineering. But getting all the right components to the right locations within the plants has been a challenge. To gain better control over where light-generating ingredients end up, Michael S. Strano and colleagues recently created nanoparticles that travel to specific destinations within plants. Building on this work, the researchers wanted to take the next step and develop a “nanobionic,” glowing plant.

The team infused watercress and other plants with three different nanoparticles in a pressurized bath. The nanoparticles were loaded with light-emitting luciferin; luciferase, which modifies luciferin and makes it glow; and coenzyme A, which boosts luciferase activity. Using size and surface charge to control where the sets of nanoparticles could go within the plant tissues, the researchers could optimize how much light was emitted. Their watercress was half as bright as a commercial 1 microwatt LED and 100,000 times brighter than genetically engineered tobacco plants. Also, the plant could be turned off by adding a compound that blocks luciferase from activating luciferin’s glow.

Here’s a video from MIT detailing their research,

A December 13, 2017 MIT news release (also on EurekAlert) casts more light on the topic (I couldn’t resist the word play),

Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

“The vision is to make a plant that will function as a desk lamp — a lamp that you don’t have to plug in. The light is ultimately powered by the energy metabolism of the plant itself,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study.

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants

Plant nanobionics, a new research area pioneered by Strano’s lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group’s goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. “Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment,” Strano says. “We think this is an idea whose time has come. It’s a perfect problem for plant nanobionics.”

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as “generally recognized as safe,” help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.

The researchers’ early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost the light emitted, as well as the duration of light, by further optimizing the concentration and release rates of the components.

Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano’s lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

“Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant,” Strano says. “Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.”

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, the researchers say.
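Before the link, a back-of-the-envelope calculation of my own, combining the ACS release’s figure (half as bright as a commercial 1-microwatt LED) with the MIT release’s (one-thousandth of the light needed to read by). Treat these as order-of-magnitude estimates derived from the releases, not measured values,

```python
# Rough arithmetic combining figures quoted in the two news releases above;
# order-of-magnitude estimates only, not measured values.

led_power_uw = 1.0                    # the "commercial 1 microwatt LED" (ACS release)
watercress_uw = led_power_uw / 2      # "half as bright" -> ~0.5 microwatts
tobacco_uw = watercress_uw / 100_000  # "100,000 times brighter than ... tobacco plants"
reading_uw = watercress_uw * 1_000    # "one-thousandth of the amount needed to read by"

print(f"glowing watercress: ~{watercress_uw} microwatts")        # ~0.5
print(f"engineered tobacco: ~{tobacco_uw} microwatts")           # ~5e-06, i.e., ~5 picowatts
print(f"reading-light target: ~{reading_uw / 1000} milliwatts")  # ~0.5
```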

Here’s a link to and a citation for the paper,

A Nanobionic Light-Emitting Plant by Seon-Yeong Kwak, Juan Pablo Giraldo, Min Hao Wong, Volodymyr B. Koman, Tedrick Thomas Salim Lew, Jon Ell, Mark C. Weidman, Rosalie M. Sinclair, Markita P. Landry, William A. Tisdale, and Michael S. Strano. Nano Lett., 2017, 17 (12), pp 7951–7961 DOI: 10.1021/acs.nanolett.7b04369 Publication Date (Web): November 17, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

A 3D printed ‘living’ tattoo

MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells. Courtesy of the researchers [and MIT]

If that image isn’t enough, there’s also a video abstract (I don’t think I’ve seen one of these before) for the paper,

For those who’d still like to read the text, here’s more from a December 5, 2017 MIT (Massachusetts Institute of Technology) news release (also on EurekAlert),

MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells.

The cells are engineered to light up in response to a variety of stimuli. When mixed with a slurry of hydrogel and nutrients, the cells can be printed, layer by layer, to form three-dimensional, interactive structures and devices.

The team then demonstrated its technique by printing a “living tattoo” — a thin, transparent patch patterned with live bacteria cells in the shape of a tree. Each branch of the tree is lined with cells sensitive to a different chemical or molecular compound. When the patch is adhered to skin that has been exposed to the same compounds, corresponding regions of the tree light up in response.

The researchers, led by Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering, and Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science, say that their technique can be used to fabricate “active” materials for wearable sensors and interactive displays. Such materials can be patterned with live cells engineered to sense environmental chemicals and pollutants as well as changes in pH and temperature.

What’s more, the team developed a model to predict the interactions between cells within a given 3-D-printed structure, under a variety of conditions. The team says researchers can use the model as a guide in designing responsive living materials.

Zhao, Lu, and their colleagues have published their results today [December 5, 2017] in the journal Advanced Materials. The paper’s co-authors are graduate students Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, and postdoc Cesar de la Fuente-Nunez.

A hardy alternative

In recent years, scientists have explored a variety of responsive materials as the basis for 3D-printed inks. For instance, scientists have used inks made from temperature-sensitive polymers to print heat-responsive shape-shifting objects. Others have printed photoactivated structures from polymers that shrink and stretch in response to light.

Zhao’s team, working with bioengineers in Lu’s lab, realized that live cells might also serve as responsive materials for 3D-printed inks, particularly as they can be genetically engineered to respond to a variety of stimuli. The researchers are not the first to consider 3-D printing genetically engineered cells; others have attempted to do so using live mammalian cells, but with little success.

“It turns out those cells were dying during the printing process, because mammalian cells are basically lipid bilayer balloons,” Yuk says. “They are too weak, and they easily rupture.”

Instead, the team identified a hardier cell type in bacteria. Bacterial cells have tough cell walls that are able to survive relatively harsh conditions, such as the forces applied to ink as it is pushed through a printer’s nozzle. Furthermore, bacteria, unlike mammalian cells, are compatible with most hydrogels — gel-like materials that are made from a mix of mostly water and a bit of polymer. The group found that hydrogels can provide an aqueous environment that can support living bacteria.

The researchers carried out a screening test to identify the type of hydrogel that would best host bacterial cells. After an extensive search, a hydrogel with pluronic acid was found to be the most compatible material. The hydrogel also exhibited an ideal consistency for 3-D printing.

“This hydrogel has ideal flow characteristics for printing through a nozzle,” Zhao says. “It’s like squeezing out toothpaste. You need [the ink] to flow out of a nozzle like toothpaste, and it can maintain its shape after it’s printed.”

From tattoos to living computers

Lu provided the team with bacterial cells engineered to light up in response to a variety of chemical stimuli. The researchers then came up with a recipe for their 3-D ink, using a combination of bacteria, hydrogel, and nutrients to sustain the cells and maintain their functionality.

“We found this new ink formula works very well and can print at a high resolution of about 30 micrometers per feature,” Zhao says. “That means each line we print contains only a few cells. We can also print relatively large-scale structures, measuring several centimeters.”

They printed the ink using a custom 3-D printer that they built using standard elements combined with fixtures they machined themselves. To demonstrate the technique, the team printed a pattern of hydrogel with cells in the shape of a tree on an elastomer layer. After printing, they solidified, or cured, the patch by exposing it to ultraviolet radiation. They then adhered the transparent elastomer layer, with the living patterns on it, to skin.

To test the patch, the researchers smeared several chemical compounds onto the back of a test subject’s hand, then pressed the hydrogel patch over the exposed skin. Over several hours, branches of the patch’s tree lit up when bacteria sensed their corresponding chemical stimuli.

The researchers also engineered bacteria to communicate with each other; for instance, they programmed some cells to light up only when they receive a certain signal from another cell. To test this type of communication in a 3-D structure, they printed a thin sheet of hydrogel filaments with “input,” or signal-producing, bacteria and chemicals, overlaid with another layer of filaments of “output,” or signal-receiving, bacteria. They found the output filaments lit up only when they overlapped and received input signals from corresponding bacteria.

Yuk says in the future, researchers may use the team’s technique to print “living computers” — structures with multiple types of cells that communicate with each other, passing signals back and forth, much like transistors on a microchip.

“This is very future work, but we expect to be able to print living computational platforms that could be wearable,” Yuk says.

For more near-term applications, the researchers are aiming to fabricate customized sensors, in the form of flexible patches and stickers that could be engineered to detect a variety of chemical and molecular compounds. They also envision that their technique may be used to manufacture drug capsules and surgical implants, containing cells engineered to produce compounds such as glucose, to be released therapeutically over time.

“We can use bacterial cells like workers in a 3-D factory,” Liu says. “They can be engineered to produce drugs within a 3-D scaffold, and applications should not be confined to epidermal devices. As long as the fabrication method and approach are viable, applications such as implants and ingestibles should be possible.”

Here’s a link to and a citation for the paper,

3D Printing of Living Responsive Materials and Devices by Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, Cesar de la Fuente-Nunez, Timothy K. Lu, and Xuanhe Zhao. Advanced Materials DOI: 10.1002/adma.201704821 Version of Record online: 5 DEC 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add any new experts if the authors planned to have their paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or at least acknowledge (e.g., ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge captures the perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a project in New Zealand, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety, as it provides some insight into gene drives and, with some imagination on the reader’s part, the potential for the other technologies discussed in the report.
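Yong’s brown-rat example is easy to sanity-check with a toy simulation. Here’s a minimal sketch in Python, under deliberately simplified assumptions of my own (random mating, non-overlapping generations, a perfectly efficient drive); it contrasts ordinary 50-50 inheritance with a CRISPR-style drive that copies itself over the other allele,

```python
import random

def next_generation(pop, size, drive):
    """pop: list of genotypes, each a pair of alleles ('B' = brown/drive,
    'W' = white). Random mating, non-overlapping generations. With
    drive=True, any offspring carrying one 'B' copy has its 'W' copy
    converted to 'B' (a perfectly efficient drive)."""
    offspring = []
    for _ in range(size):
        mom, dad = random.choice(pop), random.choice(pop)
        child = [random.choice(mom), random.choice(dad)]  # one allele from each parent
        if drive and 'B' in child:
            child = ['B', 'B']  # the drive copies itself over the other allele
        offspring.append(child)
    return offspring

def drive_frequency(pop):
    return sum(genotype.count('B') for genotype in pop) / (2 * len(pop))

for drive in (False, True):
    # start 50-50: half brown homozygotes, half white homozygotes
    pop = [['B', 'B']] * 500 + [['W', 'W']] * 500
    freqs = []
    for _ in range(8):
        pop = next_generation(pop, 1000, drive)
        freqs.append(round(drive_frequency(pop), 2))
    print('drive' if drive else 'mendelian', freqs)
```

Run it and the Mendelian allele frequency just drifts around 0.5, while the drive version rushes toward 1.0 within a few generations, which is exactly why containment worries Esvelt.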

One last comment: I notice that Eric Drexler is cited as one of the report’s authors. He’s familiar to me as K. Eric Drexler, the author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles

MIT (Massachusetts Institute of Technology) researchers have developed a new nonviral means of delivering CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 gene therapy, according to a November 13, 2017 news item on Nanowerk,

In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes in mice. The team used nanoparticles to carry the CRISPR components, eliminating the need to use viruses for delivery.

Using the new delivery technique, the researchers were able to cut out certain genes in about 80 percent of liver cells, the best success rate ever achieved with CRISPR in adult animals.

In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes, eliminating the need to use viruses for delivery. Image: MIT News

A November 13, 2017 MIT news release (also on EurekAlert), which originated the news item, provides more details about the research, along with a good description of, and comparison between, viral and nanoparticle-based systems for delivering CRISPR-Cas9,

“What’s really exciting here is that we’ve shown you can make a nanoparticle that can be used to permanently and specifically edit the DNA in the liver of an adult animal,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

One of the genes targeted in this study, known as Pcsk9, regulates cholesterol levels. Mutations in the human version of the gene are associated with a rare disorder called dominant familial hypercholesterolemia, and the FDA recently approved two antibody drugs that inhibit Pcsk9. However, these antibodies need to be taken regularly, and for the rest of the patient’s life, to provide therapy. The new nanoparticles permanently edit the gene following a single treatment, and the technique also offers promise for treating other liver disorders, according to the MIT team.

Anderson is the senior author of the study, which appears in the Nov. 13 [2017] issue of Nature Biotechnology. The paper’s lead author is Koch Institute research scientist Hao Yin. Other authors include David H. Koch Institute Professor Robert Langer of MIT, professors Victor Koteliansky and Timofei Zatsepin of the Skolkovo Institute of Science and Technology [Russia], and Professor Wen Xue of the University of Massachusetts Medical School.

Targeting disease

Many scientists are trying to develop safe and efficient ways to deliver the components needed for CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

In most cases, researchers rely on viruses to carry the gene for Cas9, as well as the RNA guide strand. In 2014, Anderson, Yin, and their colleagues developed a nonviral delivery system in the first-ever demonstration of curing a disease (the liver disorder tyrosinemia) with CRISPR in an adult animal. However, this type of delivery requires a high-pressure injection, a method that can also cause some damage to the liver.

Later, the researchers showed they could deliver the components without the high-pressure injection by packaging messenger RNA (mRNA) encoding Cas9 into a nanoparticle instead of a virus. Using this approach, in which the guide RNA was still delivered by a virus, the researchers were able to edit the target gene in about 6 percent of hepatocytes, which is enough to treat tyrosinemia.

While that delivery technique holds promise, in some situations it would be better to have a completely nonviral delivery system, Anderson says. One consideration is that once a particular virus is used, the patient will develop antibodies to it, so it couldn’t be used again. Also, some patients have pre-existing antibodies to the viruses being tested as CRISPR delivery vehicles.

In the new Nature Biotechnology paper, the researchers came up with a system that delivers both Cas9 and the RNA guide using nanoparticles, with no need for viruses. To deliver the guide RNAs, they first had to chemically modify the RNA to protect it from enzymes in the body that would normally break it down before it could reach its destination.

The researchers analyzed the structure of the complex formed by Cas9 and the RNA guide, or sgRNA, to figure out which sections of the guide RNA strand could be chemically modified without interfering with the binding of the two molecules. Based on this analysis, they created and tested many possible combinations of modifications.

“We used the structure of the Cas9 and sgRNA complex as a guide and did tests to figure out [that] we can modify as much as 70 percent of the guide RNA,” Yin says. “We could heavily modify it and not affect the binding of sgRNA and Cas9, and this enhanced modification really enhances activity.”

Reprogramming the liver

The researchers packaged these modified RNA guides (which they call enhanced sgRNA) into lipid nanoparticles, which they had previously used to deliver other types of RNA to the liver, and injected them into mice along with nanoparticles containing mRNA that encodes Cas9.

They experimented with knocking out a few different genes expressed by hepatocytes, but focused most of their attention on the cholesterol-regulating Pcsk9 gene. The researchers were able to eliminate this gene in more than 80 percent of liver cells, and the Pcsk9 protein was undetectable in these mice. They also found a 35 percent drop in the total cholesterol levels of the treated mice.

The researchers are now working on identifying other liver diseases that might benefit from this approach, and advancing these approaches toward use in patients.

“I think having a fully synthetic nanoparticle that can specifically turn genes off could be a powerful tool not just for Pcsk9 but for other diseases as well,” Anderson says. “The liver is a really important organ and also is a source of disease for many people. If you can reprogram the DNA of your liver while you’re still using it, we think there are many diseases that could be addressed.”

“We are very excited to see this new application of nanotechnology open new avenues for gene editing,” Langer adds.

The research was funded by the National Institutes of Health (NIH), the Russian Scientific Fund, the Skoltech Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Here’s a link to and a citation for the paper,

Structure-guided chemical modification of guide RNA enables potent non-viral in vivo genome editing by Hao Yin, Chun-Qing Song, Sneha Suresh, Qiongqiong Wu, Stephen Walsh, Luke Hyunsik Rhym, Esther Mintzer, Mehmet Fatih Bolukbasi, Lihua Julie Zhu, Kevin Kauffman, Haiwei Mou, Alicia Oberholzer, Junmei Ding, Suet-Yan Kwan, Roman L Bogorad, Timofei Zatsepin, Victor Koteliansky, Scot A Wolfe, Wen Xue, Robert Langer, & Daniel G Anderson. Nature Biotechnology doi:10.1038/nbt.4005 Published online: 13 November 2017

This paper is behind a paywall.

Gold’s origin in the universe due to cosmic collision

A hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins of gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strew them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies, but it did accompany Koren’s article, so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.
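
The emphasized sentence above invites a little arithmetic. Here’s a quick sketch in Python using rounded figures: Earth’s mass, and the commonly quoted figure of roughly 3,000 tonnes for annual world gold production (both are approximations I’m supplying, not numbers from the press release),

```python
# Quick arithmetic on the claim that the collision produced as much
# gold as the mass of the Earth. All figures are rounded.
EARTH_MASS = 5.97e24     # kg
ANNUAL_MINING = 3.0e6    # kg; world gold production is roughly 3,000 tonnes/year

years = EARTH_MASS / ANNUAL_MINING
print(f"equivalent to ~{years:.1e} years of terrestrial gold mining")  # ~2e18 years
```

In other words, one collision’s worth of gold dwarfs anything humans will ever dig up.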

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases, each focusing on its own area of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?
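
Spencer’s figures are easy to sanity-check. Here’s the back-of-envelope arithmetic in Python, using rounded values from the excerpts above (a solar mass in a ball about 10 km across),

```python
import math

# Back-of-envelope check of the figures above: roughly a solar mass
# packed into a ball about 10 km across. Rounded values only.
M_SUN = 1.99e30    # kg
RADIUS = 5e3       # metres (10 km across -> 5 km radius)

volume = (4 / 3) * math.pi * RADIUS**3
density = M_SUN / volume
print(f"density ~ {density:.1e} kg/m^3")   # ~4e18 kg/m^3 with these inputs

# An atomic nucleus is roughly 2e17 kg/m^3, so these quoted figures put
# the star at or above nuclear density, which is the point of the
# 'giant atomic nucleus' analogy. A teaspoon (5 ml) of the stuff:
print(f"teaspoon ~ {5e-6 * density / 1000:.1e} tonnes")
```

A teaspoon weighing billions of tonnes is the usual party trick of neutron star explainers, and the arithmetic bears it out.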

Spencer does a good job of gradually taking you through increasingly complex explanations.

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

(Merry Christmas!) Japanese tree frogs inspire hardware for the highest of tech: a swarmalator

First, the frog,

[Japanese Tree Frog] By 池田正樹 (talk)masaki ikeda – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4593224

I wish they had included a recording of the mating calls of Japanese tree frogs, since those calls were the inspiration for mathematicians at Cornell University (New York state, US), according to a November 17, 2017 news item on ScienceDaily,

How does the Japanese tree frog figure into the latest work of noted mathematician Steven Strogatz? As it turns out, quite prominently.

“We had read about these funny frogs that hop around and croak,” said Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. “They form patterns in space and time. Usually it’s about reproduction. And based on how the other guy or guys are croaking, they don’t want to be around another one that’s croaking at the same time as they are, because they’ll jam each other.”

A November 15, 2017 Cornell University news release (also on EurekAlert but dated November 17, 2017) by Tom Fleischman, which originated the news item, details how the calls led to ‘swarmalators’ (Note: Links have been removed),

Strogatz and Kevin O’Keeffe, Ph.D. ’17, used the curious mating ritual of male Japanese tree frogs as inspiration for their exploration of “swarmalators” – their term for systems in which both synchronization and swarming occur together.

Specifically, they considered oscillators whose phase dynamics and spatial dynamics are coupled. In the instance of the male tree frogs, they attempt to croak in exact anti-phase (one croaks while the other is silent) while moving away from a rival so as to be heard by females.

This opens up “a new class of math problems,” said Strogatz, a Stephen H. Weiss Presidential Fellow. “The question is, what do we expect to see when people start building systems like this or observing them in biology?”

Their paper, “Oscillators That Sync and Swarm,” was published Nov. 13 [2017] in Nature Communications. Strogatz and O’Keeffe – now a postdoctoral researcher with the Senseable City Lab at the Massachusetts Institute of Technology – collaborated with Hyunsuk Hong from Chonbuk National University in Jeonju, South Korea.

Swarming and synchronization both involve large, self-organizing groups of individuals interacting according to simple rules, but rarely have they been studied together, O’Keeffe said.

“No one had connected these two areas, in spite of the fact that there were all these parallels,” he said. “That was the theoretical idea that sort of seduced us, I suppose. And there were also a couple of concrete examples, which we liked – including the tree frogs.”

Studies of swarms focus on how animals move – think of birds flocking or fish schooling – while neglecting the dynamics of their internal states. Studies of synchronization do the opposite: They focus on oscillators’ internal dynamics. Strogatz long has been fascinated by fireflies’ synchrony and other similar phenomena, giving a TED Talk on the topic in 2004, but not on their motion.

“[Swarming and synchronization] are so similar, and yet they were never connected together, and it seems so obvious,” O’Keeffe said. “It’s a whole new landscape of possible behaviors that hadn’t been explored before.”

Using a pair of governing equations that assume swarmalators are free to move about, along with numerical simulations, the group found that a swarmalator system settles into one of five states:

  • Static synchrony – featuring circular symmetry, crystal-like distribution, fully synchronized in phase;
  • Static asynchrony – featuring uniform distribution, meaning that every phase occurs everywhere;
  • Static phase wave – swarmalators settle near others in a phase similar to their own, and phases are frozen at their initial values;
  • Splintered phase wave – nonstationary, disconnected clusters of distinct phases; and
  • Active phase wave – similar to bidirectional states found in biological swarms, where populations split into counter-rotating subgroups; also similar to vortex arrays formed by groups of sperm.

Through the study of simple models, the group found that the coupling of “sync” and “swarm” leads to rich patterns in both time and space, and could lead to further study of systems that exhibit this dual behavior.

“This opens up a lot of questions for many parts of science – there are a lot of things to try that people hadn’t thought of trying,” Strogatz said. “It’s science that opens doors for science. It’s inaugurating science, rather than culminating science.”
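
For a rough feel for how ‘sync’ and ‘swarm’ couple, here’s a minimal numerical sketch in Python in the spirit of the model described above. The particle count, step size, and specific values of the coupling constants (J for how phase similarity affects attraction, K for how distance affects phase coupling) are my own illustrative choices, not values taken from the paper,

```python
import numpy as np

# Minimal sketch in the spirit of the swarmalator model in O'Keeffe,
# Hong & Strogatz (2017): positions are pulled together with a strength
# modulated by phase similarity (J); phases are nudged by a
# Kuramoto-style coupling that weakens with distance (K).

N, STEPS, DT = 100, 2000, 0.05
J, K = 1.0, -0.1   # illustrative values; varying J and K moves the system between states

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(N, 2))        # spatial positions x_i
phase = rng.uniform(0, 2 * np.pi, size=N)    # internal phases theta_i

for _ in range(STEPS):
    diff = pos[None, :, :] - pos[:, None, :]      # x_j - x_i for all pairs
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                # zero out self-interaction
    dphase = phase[None, :] - phase[:, None]      # theta_j - theta_i

    # Swarming: attraction modulated by phase similarity, plus
    # short-range repulsion so the group doesn't collapse to a point.
    attract = diff / dist[..., None] * (1 + J * np.cos(dphase))[..., None]
    repel = diff / (dist ** 2)[..., None]
    pos += DT * (attract - repel).mean(axis=1)

    # Syncing: phase coupling that weakens with distance.
    phase += DT * K * (np.sin(dphase) / dist).mean(axis=1)

coherence = abs(np.exp(1j * phase).mean())   # 1 = fully synced, ~0 = spread out
print(f"swarm radius: {np.linalg.norm(pos, axis=1).max():.2f}, phase coherence: {coherence:.2f}")
```

Different choices of J and K land the system in the different states listed above; the printout is only a crude summary of where it settles.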

Here’s a link to and a citation for the paper,

Oscillators that sync and swarm by Kevin P. O’Keeffe, Hyunsuk Hong, & Steven H. Strogatz. Nature Communications 8, Article number: 1504 (2017) doi:10.1038/s41467-017-01190-3 Published online: 15 November 2017

This paper is open access.

One last thing, these frogs have also inspired WiFi improvements (from the Japanese tree frog Wikipedia entry; Note: Links have been removed),

Journalist Toyohiro Akiyama carried some Japanese tree frogs with him during his trip to the Mir space station in December 1990.[citation needed] Calling behavior of the species was used to create an algorithm for optimizing Wi-Fi networks.[3]
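
That algorithm is essentially a desynchronization scheme: each node wants to ‘croak’ as far as possible in time from its neighbours. Here’s a toy sketch in Python of the idea as I understand it; the midpoint update rule and the gain are my own illustration, not the published algorithm,

```python
# Toy sketch of the desynchronization idea credited to the frogs: each
# node nudges its firing time toward the midpoint of the two nodes that
# fire just before and just after it, so the calls (or Wi-Fi beacons)
# spread out evenly around the cycle.

phases = [0.10, 0.15, 0.80]   # firing times on a repeating unit cycle
ALPHA = 0.5                   # how far each node jumps toward the midpoint

for _ in range(50):
    order = sorted(range(len(phases)), key=phases.__getitem__)
    updated = phases[:]
    for rank, i in enumerate(order):
        prev_p = phases[order[rank - 1]]
        next_p = phases[order[(rank + 1) % len(order)]]
        # Unwrap the neighbours around the cycle so the midpoint makes sense.
        if prev_p >= phases[i]:
            prev_p -= 1.0
        if next_p <= phases[i]:
            next_p += 1.0
        updated[i] = ((1 - ALPHA) * phases[i] + ALPHA * (prev_p + next_p) / 2) % 1.0
    phases = updated

print("settled phases:", [round(p, 2) for p in sorted(phases)])
# With three nodes they end up about a third of a cycle apart,
# like frogs croaking in turn rather than over each other.
```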

While it’s not clear in the Wikipedia entry, the frogs were part of an experiment. Here’s a link to and a citation for the paper about the experiment, along with an abstract,

The Frog in Space (FRIS) experiment onboard Space Station Mir: final report and follow-on studies by Yamashita, M.; Izumi-Kurotani, A.; Mogami, Y.; Okuno, M.; Naitoh, T.; Wassersug, R. J. Biol Sci Space. 1997 Dec;11(4):313-20.

Abstract

The “Frog in Space” (FRIS) experiment marked a major step for Japanese space life science, on the occasion of the first space flight of a Japanese cosmonaut. At the core of FRIS were six Japanese tree frogs, Hyla japonica, flown on Space Station Mir for 8 days in 1990. The behavior of these frogs was observed and recorded under microgravity. The frogs took up a “parachuting” posture when drifting in a free volume on Mir. When perched on surfaces, they typically sat with their heads bent backward. Such a peculiar posture, after long exposure to microgravity, is discussed in light of motion sickness in amphibians. Histological examinations and other studies were made on the specimens upon recovery. Some organs, such as the liver and the vertebra, showed changes as a result of space flight; others were unaffected. Studies that followed FRIS have been conducted to prepare for a second FRIS on the International Space Station. Interspecific diversity in the behavioral reactions of anurans to changes in acceleration is the major focus of these investigations. The ultimate goal of this research is to better understand how organisms have adapted to gravity through their evolution on earth.

The paper is open access.

Machine learning software and quantum computers that think

A Sept. 14, 2017 news item on phys.org sets the stage for quantum machine learning by explaining a few basics first,

Language acquisition in young children is apparently connected with their ability to detect patterns. In their learning process, they search for patterns in the data set that help them identify and optimize grammar structures in order to properly acquire the language. Likewise, online translators use algorithms through machine learning techniques to optimize their translation engines to produce well-rounded and understandable outcomes. Even though many translations did not make much sense at all at the beginning, in these past years we have been able to see major improvements thanks to machine learning.

Machine learning techniques use mathematical algorithms and tools to search for patterns in data. These techniques have become powerful tools for many different applications, which can range from biomedical uses such as in cancer reconnaissance, in genetics and genomics, in autism monitoring and diagnosis and even plastic surgery, to pure applied physics, for studying the nature of materials, matter or even complex quantum systems.

Capable of adapting and changing when exposed to a new set of data, machine learning can identify patterns, often outperforming humans in accuracy. Although machine learning is a powerful tool, certain application domains remain out of reach due to complexity or other aspects that rule out the use of the predictions that learning algorithms provide.

Thus, in recent years, quantum machine learning has become a matter of interest because of its vast potential as a possible solution to these unresolvable challenges, and quantum computers appear to be the right tool for the job.

A Sept. 14, 2017 Institute of Photonic Sciences ([Catalan] Institut de Ciències Fotòniques; ICFO) press release, which originated the news item, goes on to detail a recently published overview of the state of quantum machine learning,

In a recent study, published in Nature, an international team of researchers comprising Jacob Biamonte from Skoltech/IQC, Peter Wittek from ICFO, Nicola Pancotti from MPQ, Patrick Rebentrost from MIT, Nathan Wiebe from Microsoft Research, and Seth Lloyd from MIT has reviewed the current status of classical machine learning and quantum machine learning. In their review, they thoroughly address different scenarios dealing with classical and quantum machine learning, considering the different possible combinations: the conventional method of using classical machine learning to analyse classical data, using quantum machine learning to analyse both classical and quantum data, and finally, using classical machine learning to analyse quantum data.

Firstly, they set out to give an in-depth view of the current status of supervised and unsupervised learning protocols in classical machine learning, surveying the methods in use. They introduce quantum machine learning and give an extensive account of how this technique could be used to analyse both classical and quantum data, emphasizing that quantum machines could accelerate processing timescales thanks to the use of quantum annealers and universal quantum computers. Quantum annealing technology has better scalability, but more limited use cases. For instance, the latest iteration of D-Wave’s [emphasis mine] superconducting chip integrates two thousand qubits, and it is used for solving certain hard optimization problems and for efficient sampling. On the other hand, universal (also called gate-based) quantum computers are harder to scale up, but they are able to perform arbitrary unitary operations on qubits by sequences of quantum logic gates. This resembles how digital computers can perform arbitrary logical operations on classical bits.

However, they address the fact that controlling a quantum system is very complex and that analyzing classical data with quantum resources is not as straightforward as one may think, mainly due to the challenge of building quantum interface devices that allow classical information to be encoded into a quantum mechanical form. Difficulties such as the “input” and “output” problems appear to be the major technical challenges that need to be overcome.

The ultimate goal is to find the most optimized method that is able to read, comprehend and obtain the best outcomes from a data set, be it classical or quantum. Quantum machine learning is definitely aimed at revolutionizing the field of computer sciences, not only because it will be able to control quantum computers and speed up information processing rates far beyond current classical velocities, but also because it is capable of carrying out innovative functions, such as quantum deep learning, that could not only recognize counter-intuitive patterns in data, invisible to both classical machine learning and the human eye, but also reproduce them.

As Peter Wittek [emphasis mine] finally states, “Writing this paper was quite a challenge: we had a committee of six co-authors with different ideas about what the field is, where it is now, and where it is going. We rewrote the paper from scratch three times. The final version could not have been completed without the dedication of our editor, to whom we are indebted.”

It was a bit of a surprise to see local (Vancouver, Canada) company D-Wave Systems mentioned but I notice that one of the paper’s authors (Peter Wittek) is mentioned in a May 22, 2017 D-Wave news release announcing a new partnership to foster quantum machine learning,

Today [May 22, 2017] D-Wave Systems Inc., the leader in quantum computing systems and software, announced a new initiative with the Creative Destruction Lab (CDL) at the University of Toronto’s Rotman School of Management. D-Wave will work with CDL, as a CDL Partner, to create a new track to foster startups focused on quantum machine learning. The new track will complement CDL’s successful existing track in machine learning. Applicants selected for the intensive one-year program will go through an introductory boot camp led by Dr. Peter Wittek [emphasis mine], author of Quantum Machine Learning: What Quantum Computing means to Data Mining, with instruction and technical support from D-Wave experts, access to a D-Wave 2000Q™ quantum computer, and the opportunity to use a D-Wave sampling service to enable machine learning computations and applications. D-Wave staff will be a part of the committee selecting up to 40 individuals for the program, which begins in September 2017.
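
For readers wondering what kind of problem a quantum annealer like D-Wave’s actually accepts, the standard framing is a QUBO (quadratic unconstrained binary optimization): minimize x^T Q x over vectors of 0s and 1s. Here’s a toy instance solved by brute force on an ordinary computer; the matrix values are invented, and a real submission to a D-Wave system would go through the company’s own software tools rather than a loop like this,

```python
import itertools

# A QUBO asks for the binary vector x that minimizes x^T Q x. This tiny
# instance is invented for illustration and solved by brute force; an
# annealer is meant to find such minima for problems far too large to
# enumerate this way.

Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,  # linear terms (diagonal)
    (0, 1): 2.0, (1, 2): 2.0,                  # pairwise couplings
}

def energy(x):
    """Evaluate x^T Q x for a tuple of 0/1 variables."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product((0, 1), repeat=3), key=energy)
print("lowest-energy assignment:", best, "with energy", energy(best))
# Prints (1, 0, 1): the couplings penalize neighbouring 1s, so the
# optimum turns on the two variables that aren't coupled to each other.
```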

For anyone interested in the paper, here’s a link to and a citation,

Quantum machine learning by Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, & Seth Lloyd. Nature 549, 195–202 (14 September 2017) doi:10.1038/nature23474 Published online 13 September 2017

This paper is behind a paywall.

Robots in Vancouver and in Canada (two of two)

This is the second of a two-part posting about robots in Vancouver and Canada. The first part included a definition, a brief mention of a robot ethics quandary, and sexbots. This part is all about the future. (Part one is here.)

Canadian Robotics Strategy

Meetings were held Sept. 28 – 29, 2017 in, surprisingly, Vancouver. (For those who don’t know, this is surprising because most of the robotics and AI research seems to be concentrated in eastern Canada. If you don’t believe me, take a look at the speaker list for Day 2 or the ‘Canadian Stakeholder’ meeting day.) From the NSERC (Natural Sciences and Engineering Research Council) events page of the Canadian Robotics Network,

Join us as we gather robotics stakeholders from across the country to initiate the development of a national robotics strategy for Canada. Sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC), this two-day event coincides with the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) in order to leverage the experience of international experts as we explore Canada’s need for a national robotics strategy.

Where
Vancouver, BC, Canada

When
Thursday September 28 & Friday September 29, 2017 — Save the date!

Download the full agenda and speakers’ list here.

Objectives

The purpose of this two-day event is to gather members of the robotics ecosystem from across Canada to initiate the development of a national robotics strategy that builds on our strengths and capacities in robotics, and is uniquely tailored to address Canada’s economic needs and social values.

This event has been sponsored by the Natural Sciences and Engineering Research Council of Canada (NSERC) and is supported in kind by the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2017) as an official Workshop of the conference. The first of two days coincides with IROS 2017 – one of the premier robotics conferences globally – in order to leverage the experience of international robotics experts as we explore Canada’s need for a national robotics strategy here at home.

Who should attend

Representatives from industry, research, government, startups, investment, education, policy, law, and ethics who are passionate about building a robust and world-class ecosystem for robotics in Canada.

Program Overview

Download the full agenda and speakers’ list here.

DAY ONE: IROS Workshop 

“Best practices in designing effective roadmaps for robotics innovation”

Thursday September 28, 2017 | 8:30am – 5:00pm | Vancouver Convention Centre

Morning Program: “Developing robotics innovation policy and establishing key performance indicators that are relevant to your region” Leading international experts share their experience designing robotics strategies and policy frameworks in their regions and explore international best practices. Opening Remarks by Prof. Hong Zhang, IROS 2017 Conference Chair.

Afternoon Program: “Understanding the Canadian robotics ecosystem” Canadian stakeholders from research, industry, investment, ethics and law provide a collective overview of the Canadian robotics ecosystem. Opening Remarks by Ryan Gariepy, CTO of Clearpath Robotics.

Thursday Evening Program: Sponsored by Clearpath Robotics. Workshop participants gather at a nearby restaurant to network and socialize.

Learn more about the IROS Workshop.

DAY TWO: NSERC-Sponsored Canadian Robotics Stakeholder Meeting
“Towards a national robotics strategy for Canada”

Friday September 29, 2017 | 8:30am – 5:00pm | University of British Columbia (UBC)

On the second day of the program, robotics stakeholders from across the country gather at UBC for a full day brainstorming session to identify Canada’s unique strengths and opportunities relative to the global competition, and to align on a strategic vision for robotics in Canada.

Friday Evening Program: Sponsored by NSERC. Meeting participants gather at a nearby restaurant for the event’s closing dinner reception.

Learn more about the Canadian Robotics Stakeholder Meeting.

I was glad to see in the agenda that some of the international speakers represented research efforts from outside the usual Europe/US axis.

I have been in touch with one of the organizers (also mentioned in part one with regard to robot ethics), Ajung Moon (her website is here), who says that there will be a white paper available on the Canadian Robotics Network website at some point in the future. I’ll keep looking for it and, in the meantime, I wonder what the 2018 Canadian federal budget will offer robotics.

Robots and popular culture

For anyone living in Canada or the US, Westworld (television series) is probably the most recent and well-known ‘robot’ drama to premiere in the last year. As for movies, I think Ex Machina from 2014 probably qualifies in that category. Interestingly, both Westworld and Ex Machina seem quite concerned with sex, with Westworld adding significant doses of violence as another concern.

I am going to focus on another robot story, the 2012 movie, Robot & Frank, which features a care robot and an older man,

Frank (played by Frank Langella), a former jewel thief, teaches a robot the skills necessary to rob some neighbours of their valuables. The ethical issue broached in the film isn’t whether or not the robot should learn the skills and assist Frank in his thieving ways, although that’s touched on when Frank keeps pointing out that planning his heist requires he live more healthily. No, the problem arises afterward when the neighbour accuses Frank of the robbery and Frank removes what he believes is all the evidence. He believes he’s going to successfully evade arrest until the robot notes that Frank will have to erase its memory in order to remove all of the evidence. The film ends without the robot’s fate being made explicit.

In a way, I find the ethics query (was the robot Frank’s friend or just a machine?) posed in the film more interesting than the one in Vikander’s story, an issue which does have a history. For example, care aides, nurses, and/or servants would have dealt with requests to give an alcoholic patient a drink. Wouldn’t there already be established guidelines and practices which could be adapted for robots? Or, is this question made anew by something intrinsically different about robots?

To be clear, Vikander’s story is a good introduction and starting point for these kinds of discussions as is Moon’s ethical question. But they are starting points and I hope one day there’ll be a more extended discussion of the questions raised by Moon and noted in Vikander’s article (a two- or three-part series of articles? public discussions?).

How will humans react to robots?

Earlier there was the contention that intimate interactions with robots and sexbots would decrease empathy and the ability of human beings to interact with each other in caring ways. This sounds a bit like the argument about smartphones/cell phones and teenagers who don’t relate well to others in real life because most of their interactions are mediated through a screen, which many seem to prefer. It may be partially true but, arguably, books too are an antisocial technology, as noted in Walter J. Ong’s influential 1982 book, Orality and Literacy (from the Walter J. Ong Wikipedia entry),

A major concern of Ong’s works is the impact that the shift from orality to literacy has had on culture and education. Writing is a technology like other technologies (fire, the steam engine, etc.) that, when introduced to a “primary oral culture” (which has never known writing) has extremely wide-ranging impacts in all areas of life. These include culture, economics, politics, art, and more. Furthermore, even a small amount of education in writing transforms people’s mentality from the holistic immersion of orality to interiorization and individuation. [emphases mine]

So, robotics and artificial intelligence would not be the first technologies to affect our brains and our social interactions.

There’s another area where human-robot interaction may have unintended personal consequences according to April Glaser’s Sept. 14, 2017 article on Slate.com (Note: Links have been removed),

The customer service industry is teeming with robots. From automated phone trees to touchscreens, software and machines answer customer questions, complete orders, send friendly reminders, and even handle money. For an industry that is, at its core, about human interaction, it’s increasingly being driven to a large extent by nonhuman automation.

But despite the dreams of science-fiction writers, few people enter a customer-service encounter hoping to talk to a robot. And when the robot malfunctions, as they so often do, it’s a human who is left to calm angry customers. It’s understandable that after navigating a string of automated phone menus and being put on hold for 20 minutes, a customer might take her frustration out on a customer service representative. Even if you know it’s not the customer service agent’s fault, there’s really no one else to get mad at. It’s not like a robot cares if you’re angry.

When human beings need help with something, says Madeleine Elish, an anthropologist and researcher at the Data and Society Institute who studies how humans interact with machines, they’re not only looking for the most efficient solution to a problem. They’re often looking for a kind of validation that a robot can’t give. “Usually you don’t just want the answer,” Elish explained. “You want sympathy, understanding, and to be heard”—none of which are things robots are particularly good at delivering. In a 2015 survey of over 1,300 people conducted by researchers at Boston University, over 90 percent of respondents said they start their customer service interaction hoping to speak to a real person, and 83 percent admitted that in their last customer service call they trotted through phone menus only to make their way to a human on the line at the end.

“People can get so angry that they have to go through all those automated messages,” said Brian Gnerer, a call center representative with AT&T in Bloomington, Minnesota. “They’ve been misrouted or been on hold forever or they pressed one, then two, then zero to speak to somebody, and they are not getting where they want.” And when people do finally get a human on the phone, “they just sigh and are like, ‘Thank God, finally there’s somebody I can speak to.’ ”

Even if robots don’t always make customers happy, more and more companies are making the leap to bring in machines to take over jobs that used to specifically necessitate human interaction. McDonald’s and Wendy’s both reportedly plan to add touchscreen self-ordering machines to restaurants this year. Facebook is saturated with thousands of customer service chatbots that can do anything from hail an Uber, retrieve movie times, to order flowers for loved ones. And of course, corporations prefer automated labor. As Andy Puzder, CEO of the fast-food chains Carl’s Jr. and Hardee’s and former Trump pick for labor secretary, bluntly put it in an interview with Business Insider last year, robots are “always polite, they always upsell, they never take a vacation, they never show up late, there’s never a slip-and-fall, or an age, sex, or race discrimination case.”

But those robots are backstopped by human beings. How does interacting with more automated technology affect the way we treat each other? …

“We know that people treat artificial entities like they’re alive, even when they’re aware of their inanimacy,” writes Kate Darling, a researcher at MIT who studies ethical relationships between humans and robots, in a recent paper on anthropomorphism in human-robot interaction. Sure, robots don’t have feelings and don’t feel pain (not yet, anyway). But as more robots rely on interaction that resembles human interaction, like voice assistants, the way we treat those machines will increasingly bleed into the way we treat each other.

It took me a while to realize that what Glaser is talking about are AI systems and not robots as such. (sigh) It’s so easy to conflate the concepts.

AI ethics (Toby Walsh and Suzanne Gildert)

Jack Stilgoe of the Guardian published a brief Oct. 9, 2017 introduction to his more substantive (30 mins.?) podcast interview with Dr. Toby Walsh where they discuss stupid AI amongst other topics (Note: A link has been removed),

Professor Toby Walsh has recently published a book – Android Dreams – giving a researcher’s perspective on the uncertainties and opportunities of artificial intelligence. Here, he explains to Jack Stilgoe that we should worry more about the short-term risks of stupid AI in self-driving cars and smartphones than the speculative risks of super-intelligence.

Professor Walsh discusses the effects that AI could have on our jobs, the shapes of our cities and our understandings of ourselves. As someone developing AI, he questions the hype surrounding the technology. He is scared by some drivers’ real-world experimentation with their not-quite-self-driving Teslas. And he thinks that Siri needs to start owning up to being a computer.

I found this discussion to cast a decidedly different light on the future of robotics and AI. Walsh is much more interested in discussing immediate issues like the problems posed by ‘self-driving’ cars. (Aside: Should we be calling them robot cars?)

One ethical issue Walsh raises is with data regarding accidents. He compares what’s happening with accident data from self-driving (robot) cars to how the aviation industry handles accidents. Hint: accident data involving airplanes is shared. Would you like to guess who does not share their data?

Sharing and analyzing data and developing new safety techniques based on that data have made flying a remarkably safe transportation technology. Walsh argues the same could be done for self-driving cars if companies like Tesla took the attitude that safety is in everyone’s best interests and shared their accident data in a scheme similar to the aviation industry’s.

In an Oct. 12, 2017 article by Matthew Braga for Canadian Broadcasting Corporation (CBC) news online, another ethical issue is raised by Suzanne Gildert, a participant in the Canadian Robotics Roadmap/Strategy meetings mentioned earlier here (Note: Links have been removed),

… Suzanne Gildert, the co-founder and chief science officer of Vancouver-based robotics company Kindred. Since 2014, her company has been developing intelligent robots [emphasis mine] that can be taught by humans to perform automated tasks — for example, handling and sorting products in a warehouse.

The idea is that when one of Kindred’s robots encounters a scenario it can’t handle, a human pilot can take control. The human can see, feel and hear the same things the robot does, and the robot can learn from how the human pilot handles the problematic task.

This process, called teleoperation, is one way to fast-track learning by manually showing the robot examples of what its trainers want it to do. But it also poses a potential moral and ethical quandary that will only grow more serious as robots become more intelligent.

“That AI is also learning my values,” Gildert explained during a talk on robot ethics at the Singularity University Canada Summit in Toronto on Wednesday [Oct. 11, 2017]. “Everything — my mannerisms, my behaviours — is all going into the AI.”

At its worst, everything from algorithms used in the U.S. to sentence criminals to image-recognition software has been found to inherit the racist and sexist biases of the data on which it was trained.

But just as bad habits can be learned, good habits can be learned too. The question is, if you’re building a warehouse robot like Kindred is, is it more effective to train those robots’ algorithms to reflect the personalities and behaviours of the humans who will be working alongside it? Or do you try to blend all the data from all the humans who might eventually train Kindred robots around the world into something that reflects the best strengths of all?

I notice Gildert distinguishes her robots as “intelligent robots” and then focuses on AI and the issues with bias which have already arisen with regard to algorithms (see my May 24, 2017 posting about bias in machine learning and AI). Note: if you’re in Vancouver on Oct. 26, 2017 and are interested in algorithms and bias, there’s a talk being given by Dr. Cathy O’Neil, author of Weapons of Math Destruction, on the topic of Gender and Bias in Algorithms. It’s not free but tickets are here.
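
For anyone wondering what ‘bias’ means operationally, one of the simplest checks compares a system’s favourable-outcome rate across groups (demographic parity). Here’s a toy sketch in Python; the records and the groups are invented for illustration,

```python
# A toy version of the simplest bias check discussed in pieces like
# O'Neil's: compare a model's positive-outcome rate across two groups
# (demographic parity). The records below are invented.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(group):
    rows = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in rows) / len(rows)

gap = approval_rate("A") - approval_rate("B")
print(f"approval rates: A={approval_rate('A'):.2f}, B={approval_rate('B'):.2f}, gap={gap:.2f}")
# A large gap flags the system for a closer look; it doesn't by itself
# prove discrimination, since the groups may differ in relevant ways.
```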

Final comments

There is one more aspect I want to mention. Even as someone who usually deals with nanobots, it’s easy to start discussing robots as if the humanoid ones are the only ones that exist. To recapitulate, there are humanoid robots, utilitarian robots, intelligent robots, AI, nanobots, microscopic bots, and more, all of which raise questions about ethics and social impacts.

However, there is one more category I want to add to this list: cyborgs. They live amongst us now. Anyone who’s had a hip or knee replacement or a pacemaker or a deep brain stimulator or other such implanted device qualifies as a cyborg. Increasingly too, prosthetics are being introduced and made part of the body. My April 24, 2017 posting features this story,

This Case Western Reserve University (CWRU) video accompanies a March 28, 2017 CWRU news release, (h/t ScienceDaily March 28, 2017 news item)

Bill Kochevar grabbed a mug of water, drew it to his lips and drank through the straw.

His motions were slow and deliberate, but then Kochevar hadn’t moved his right arm or hand for eight years.

And it took some practice to reach and grasp just by thinking about it.

Kochevar, who was paralyzed below his shoulders in a bicycling accident, is believed to be the first person with quadriplegia in the world to have arm and hand movements restored with the help of two temporarily implanted technologies. [emphasis mine]

A brain-computer interface with recording electrodes under his skull, and a functional electrical stimulation (FES) system* activating his arm and hand, reconnect his brain to paralyzed muscles.

Does a brain-computer interface have an effect on human brain and, if so, what might that be?

In any discussion (assuming there is funding for it) about ethics and social impact, we might want to invite the broadest range of people possible at an ‘earlyish’ stage (although we’re already pretty far down the ‘automation road’) or, as Jack Stilgoe and Toby Walsh note, technological determinism holds sway.

Once again here are links for the articles and information mentioned in this double posting,

That’s it!

ETA Oct. 16, 2017: Well, I guess that wasn’t quite ‘it’. BBC’s (British Broadcasting Corporation) Magazine published a thoughtful Oct. 15, 2017 piece titled: Can we teach robots ethics?