Tag Archives: MIT

Santiago Ramón y Cajal and the butterflies of the soul

The Cajal exhibit of drawings was here in Vancouver (Canada) this last fall (2017) and I still carry the memory of that glorious experience (see my Sept. 11, 2017 posting for more about the show and associated events). It seems Cajal’s drawings elicited a similar response in New York City, from a January 18, 2018 article by Roberta Smith for the New York Times,

It’s not often that you look at an exhibition with the help of the very apparatus that is its subject. But so it is with “The Beautiful Brain: The Drawings of Santiago Ramón y Cajal” at the Grey Art Gallery at New York University, one of the most unusual, ravishing exhibitions of the season.

The show finished its run on March 31, 2018 and is now on its way to the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts for its opening on May 3, 2018. It looks like they have an exciting lineup of events to go along with the exhibit (from MIT’s The Beautiful Brain: The Drawings of Santiago Ramón y Cajal exhibit and event page),

SUMMER PROGRAMS

ONGOING

Spotlight Tours
Explorations led by local and Spanish scientists, artists, and entrepreneurs who will share their unique perspectives on particular aspects of the exhibition. (2:00 pm on select Tuesdays and Saturdays)

Tue, May 8 – Mark Harnett, Fred and Carole Middleton Career Development Professor at MIT and McGovern Institute Investigator
Sat, May 26 – Marion Boulicault, MIT Graduate Student and Neuroethics Fellow in the Center for Sensorimotor Neural Engineering
Tue, June 5 – Kelsey Allen, Graduate researcher, MIT Center for Brains, Minds, and Machines
Sat, Jun 23 – Francisco Martin-Martinez, Research Scientist in MIT’s Laboratory for Atomistic & Molecular Mechanics and President of the Spanish Foundation for Science and Technology
Jul 21 – Alex Gomez-Marin, Principal Investigator of the Behavior of Organisms Laboratory in the Instituto de Neurociencias, Spain
Tue, Jul 31 – Julie Pryor, Director of Communications at the McGovern Institute for Brain Research at MIT
Tue, Aug 28 – Satrajit Ghosh, Principal Research Scientist at the McGovern Institute for Brain Research at MIT, Assistant Professor in the Department of Otolaryngology at Harvard Medical School, and faculty member in the Speech and Hearing Biosciences and Technology program in the Harvard Division of Medical Sciences

Idea Hub
Drop in and explore expansion microscopy in our maker-space.

Visualizing Science Workshop
Experiential learning with micro-scale biological images. (pre-registration required)

Gallery Demonstrations
Researchers share the latest on neural anatomy, signal transmission, and modern imaging techniques.

EVENTS

Teen Science Café: Mindful Matters
MIT researchers studying the brain share their mind-blowing findings.

Neuron Paint Night
Create a painting of cerebral cortex neurons and learn about the EyeWire citizen science game.

Cerebral Cinema Series
Hear from researchers and then compare real science to depictions on the big screen.

Brainy Trivia
Test your brain power in a night of science trivia and short, snappy research talks.

Come back to see our exciting lineup for the fall!

If you don’t have a chance to see the show or if you’d like a preview, I encourage you to read Smith’s article, as it has several Cajal drawings embedded and rendered exceptionally well.

For those who like a little contemporary (and related) science with their art, there’s a March 30, 2018 Harvard Medical School (HMS) news release by Kevin Jang (also on EurekAlert), Note: All links save one have been removed,

Drawing of the cells of the chick cerebellum by Santiago Ramón y Cajal, from “Estructura de los centros nerviosos de las aves,” Madrid, circa 1905

Modern neuroscience, for all its complexity, can trace its roots directly to a series of pen-and-paper sketches rendered by Nobel laureate Santiago Ramón y Cajal in the late 19th and early 20th centuries.

His observations and drawings exposed the previously hidden composition of the brain, revealing neuronal cell bodies and delicate projections that connect individual neurons together into intricate networks.

As he explored the nervous systems of various organisms under his microscope, a natural question arose: What makes a human brain different from the brain of any other species?

At least part of the answer, Ramón y Cajal hypothesized, lay in a specific class of neuron—one found in a dazzling variety of shapes and patterns of connectivity, and present in higher proportions in the human brain than in the brains of other species. He dubbed them the “butterflies of the soul.”

Known as interneurons, these cells play critical roles in transmitting information between sensory and motor neurons, and, when defective, have been linked to diseases such as schizophrenia, autism and intellectual disability.

Despite more than a century of study, however, it remains unclear why interneurons are so diverse and what specific functions the different subtypes carry out.

Now, in a study published in the March 22 [2018] issue of Nature, researchers from Harvard Medical School, New York Genome Center, New York University and the Broad Institute of MIT and Harvard have detailed for the first time how interneurons emerge and diversify in the brain.

Using single-cell analysis—a technology that allows scientists to track cellular behavior one cell at a time—the team traced the lineage of interneurons from their earliest precursor states to their mature forms in mice. The researchers identified key genetic programs that determine the fate of developing interneurons, as well as when these programs are switched on or off.

The findings serve as a guide for efforts to shed light on interneuron function and may help inform new treatment strategies for disorders involving their dysfunction, the authors said.

“We knew more than 100 years ago that this huge diversity of morphologically interesting cells existed in the brain, but their specific individual roles in brain function are still largely unclear,” said co-senior author Gordon Fishell, HMS professor of neurobiology and a faculty member at the Stanley Center for Psychiatric Research at the Broad.

“Our study provides a road map for understanding how and when distinct interneuron subtypes develop, giving us unprecedented insight into the biology of these cells,” he said. “We can now investigate interneuron properties as they emerge, unlock how these important cells function and perhaps even intervene when they fail to develop correctly in neuropsychiatric disease.”

A hippocampal interneuron. Image: Biosciences Imaging Gp, Soton, Wellcome Trust via Creative Commons

Origins and Fates

In collaboration with co-senior author Rahul Satija, core faculty member of the New York Genome Center, Fishell and colleagues analyzed brain regions in developing mice known to contain precursor cells that give rise to interneurons.

Using Drop-seq, a single-cell sequencing technique created by researchers at HMS and the Broad, the team profiled gene expression in thousands of individual cells at multiple time points.

This approach overcomes a major limitation in past research, which could analyze only the average activity of mixtures of many different cells.
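[Note: to make that limitation concrete, here is a minimal Python sketch using made-up numbers (it is not the study’s actual Drop-seq pipeline). Two simulated cell subtypes each switch on a different handful of marker genes; averaging the mixture, as bulk measurements do, hides the split, while clustering the per-cell profiles recovers it.]

```python
# Toy illustration only -- simulated counts, not the study's Drop-seq data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_cells, n_genes = 1000, 50

# Baseline expression shared by every cell
expr = rng.poisson(lam=5, size=(n_cells, n_genes)).astype(float)

# Half the cells strongly express genes 0-4, the other half genes 5-9
subtype = np.repeat([0, 1], n_cells // 2)
expr[subtype == 0, 0:5] += rng.poisson(20, size=(n_cells // 2, 5))
expr[subtype == 1, 5:10] += rng.poisson(20, size=(n_cells // 2, 5))

# "Bulk" view: averaging the mixture makes the two gene programs look alike
bulk = expr.mean(axis=0)
print("bulk average, genes 0-4 vs genes 5-9:", bulk[0:5].mean(), bulk[5:10].mean())

# Single-cell view: clustering per-cell profiles separates the two subtypes
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(expr)
agreement = max((labels == subtype).mean(), (labels != subtype).mean())
print(f"cells assigned to the correct subtype: {agreement:.1%}")
```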

In the current study, the team found that the precursor state of all interneurons had similar gene expression patterns despite originating in three separate brain regions and giving rise to 14 or more interneuron subtypes alone—a number still under debate as researchers learn more about these cells.

“Mature interneuron subtypes exhibit incredible diversity. Their morphology and patterns of connectivity and activity are so different from each other, but our results show that the first steps in their maturation are remarkably similar,” said Satija, who is also an assistant professor of biology at New York University.

“They share a common developmental trajectory at the earliest stages, but the seeds of what will cause them to diverge later—a handful of genes—are present from the beginning,” Satija said.

As they profiled cells at later stages in development, the team observed the initial emergence of four interneuron “cardinal” classes, which give rise to distinct fates. Cells were committed to these fates even in the early embryo. By developing a novel computational strategy to link precursors with adult subtypes, the researchers identified individual genes that were switched on and off when cells began to diversify.

For example, they found that the gene Mef2c—mutations of which are linked to Alzheimer’s disease, schizophrenia and neurodevelopmental disorders in humans—is an early embryonic marker for a specific interneuron subtype known as Pvalb neurons. When they deleted Mef2c in animal models, Pvalb neurons failed to develop.

These early genes likely orchestrate the execution of subsequent genetic subroutines, such as ones that guide interneuron subtypes as they migrate to different locations in the brain and ones that help form unique connection patterns with other neural cell types, the authors said.

The identification of these genes and their temporal activity now provide researchers with specific targets to investigate the precise functions of interneurons, as well as how neurons diversify in general, according to the authors.

“One of the goals of this project was to address an incredibly fascinating developmental biology question, which is how individual progenitor cells decide between different neuronal fates,” Satija said. “In addition to these early markers of interneuron divergence, we found numerous additional genes that increase in expression, many dramatically, at later time points.”

The association of some of these genes with neuropsychiatric diseases promises to provide a better understanding of these disorders and the development of therapeutic strategies to treat them, a particularly important notion given the paucity of new treatments, the authors said.

Over the past 50 years, there have been no fundamentally new classes of neuropsychiatric drugs, only newer versions of old drugs, the researchers pointed out.

“Our repertoire is no better than it was in the 1970s,” Fishell said.

“Neuropsychiatric diseases likely reflect the dysfunction of very specific cell types. Our study puts forward a clear picture of what cells to look at as we work to shed light on the mechanisms that underlie these disorders,” Fishell said. “What we will find remains to be seen, but we have new, strong hypotheses that we can now test.”

As a resource for the research community, the study data and software are open-source and freely accessible online.

A gallery of the drawings of Santiago Ramón y Cajal is currently on display in New York City, and will open at the MIT Museum in Boston in May 2018.

Christian Mayer, Christoph Hafemeister and Rachel Bandler served as co-lead authors on the study.

This work was supported by the National Institutes of Health (R01 NS074972, R01 NS081297, MH071679-12, DP2-HG-009623, F30MH114462, T32GM007308, F31NS103398), the European Molecular Biology Organization, the National Science Foundation and the Simons Foundation.

Here’s a link to and a citation for the paper,

Developmental diversification of cortical inhibitory interneurons by Christian Mayer, Christoph Hafemeister, Rachel C. Bandler, Robert Machold, Renata Batista Brito, Xavier Jaglin, Kathryn Allaway, Andrew Butler, Gord Fishell, & Rahul Satija. Nature volume 555, pages 457–462 (22 March 2018) doi:10.1038/nature25999 Published: 05 March 2018

This paper is behind a paywall.

New path to viable memristor/neuristor?

I first stumbled onto memristors and the possibility of brain-like computing sometime in 2008 (around the time that R. Stanley Williams and his team at HP Labs first published the results of their research linking Dr. Leon Chua’s memristor theory to their attempts to shrink computer chips). In the almost 10 years since, scientists have worked hard to utilize memristors in the field of neuromorphic (brain-like) engineering/computing.

A January 22, 2018 news item on phys.org describes the latest work,

When it comes to processing power, the human brain just can’t be beat.

Packed within the squishy, football-sized organ are somewhere around 100 billion neurons. At any given moment, a single neuron can relay instructions to thousands of other neurons via synapses—the spaces between neurons, across which neurotransmitters are exchanged. There are more than 100 trillion synapses that mediate neuron signaling in the brain, strengthening some connections while pruning others, in a process that enables the brain to recognize patterns, remember facts, and carry out other learning tasks, at lightning speeds.

Researchers in the emerging field of “neuromorphic computing” have attempted to design computer chips that work like the human brain. Instead of carrying out computations based on binary, on/off signaling, like digital chips do today, the elements of a “brain on a chip” would work in an analog fashion, exchanging a gradient of signals, or “weights,” much like neurons that activate in various ways depending on the type and number of ions that flow across a synapse.

In this way, small neuromorphic chips could, like the brain, efficiently process millions of streams of parallel computations that are currently only possible with large banks of supercomputers. But one significant hangup on the way to such portable artificial intelligence has been the neural synapse, which has been particularly tricky to reproduce in hardware.

Now engineers at MIT [Massachusetts Institute of Technology] have designed an artificial synapse in such a way that they can precisely control the strength of an electric current flowing across it, similar to the way ions flow between neurons. The team has built a small chip with artificial synapses, made from silicon germanium. In simulations, the researchers found that the chip and its synapses could be used to recognize samples of handwriting, with 95 percent accuracy.

A January 22, 2018 MIT news release by Jennifer Chu (also on EurekAlert), which originated the news item, provides more detail about the research,

The design, published today [January 22, 2018] in the journal Nature Materials, is a major step toward building portable, low-power neuromorphic chips for use in pattern recognition and other learning tasks.

The research was led by Jeehwan Kim, the Class of 1947 Career Development Assistant Professor in the departments of Mechanical Engineering and Materials Science and Engineering, and a principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories. His co-authors are Shinhyun Choi (first author), Scott Tan (co-first author), Zefan Li, Yunjo Kim, Chanyeol Choi, and Hanwool Yeon of MIT, along with Pai-Yu Chen and Shimeng Yu of Arizona State University.

Too many paths

Most neuromorphic chip designs attempt to emulate the synaptic connection between neurons using two conductive layers separated by a “switching medium,” or synapse-like space. When a voltage is applied, ions should move in the switching medium to create conductive filaments, similarly to how the “weight” of a synapse changes.

But it’s been difficult to control the flow of ions in existing designs. Kim says that’s because most switching mediums, made of amorphous materials, have unlimited possible paths through which ions can travel — a bit like Pachinko, a mechanical arcade game that funnels small steel balls down through a series of pins and levers, which act to either divert or direct the balls out of the machine.

Like Pachinko, existing switching mediums contain multiple paths that make it difficult to predict where ions will make it through. Kim says that can create unwanted nonuniformity in a synapse’s performance.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” Kim says. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects. This stream is changing, and it’s hard to control. That’s the biggest problem — nonuniformity of the artificial synapse.”

A perfect mismatch

Instead of using amorphous materials as an artificial synapse, Kim and his colleagues looked to single-crystalline silicon, a defect-free conducting material made from atoms arranged in a continuously ordered alignment. The team sought to create a precise, one-dimensional line defect, or dislocation, through the silicon, through which ions could predictably flow.

To do so, the researchers started with a wafer of silicon, resembling, at microscopic resolution, a chicken-wire pattern. They then grew a similar pattern of silicon germanium — a material also used commonly in transistors — on top of the silicon wafer. Silicon germanium’s lattice is slightly larger than that of silicon, and Kim found that together, the two perfectly mismatched materials can form a funnel-like dislocation, creating a single path through which ions can flow.

The researchers fabricated a neuromorphic chip consisting of artificial synapses made from silicon germanium, each synapse measuring about 25 nanometers across. They applied voltage to each synapse and found that all synapses exhibited more or less the same current, or flow of ions, with about a 4 percent variation between synapses — a much more uniform performance compared with synapses made from amorphous material.

They also tested a single synapse over multiple trials, applying the same voltage over 700 cycles, and found the synapse exhibited the same current, with just 1 percent variation from cycle to cycle.
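[Note: the percentages quoted here are essentially mean-normalized spreads. A tiny Python sketch with hypothetical currents (not the team’s measurements) shows how device-to-device and cycle-to-cycle variation figures of this kind can be computed.]

```python
# Hypothetical numbers only -- not the team's measurements.
import numpy as np

rng = np.random.default_rng(1)

# Simulated read currents (arbitrary units) for 200 synapses at one voltage
device_currents = rng.normal(loc=1.0, scale=0.04, size=200)

# Simulated currents for a single synapse over 700 voltage cycles
cycle_currents = rng.normal(loc=1.0, scale=0.01, size=700)

def percent_variation(currents):
    """Standard deviation as a percentage of the mean."""
    return 100 * currents.std() / currents.mean()

print(f"device-to-device variation: {percent_variation(device_currents):.1f}%")
print(f"cycle-to-cycle variation:   {percent_variation(cycle_currents):.1f}%")
```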

“This is the most uniform device we could achieve, which is the key to demonstrating artificial neural networks,” Kim says.

Writing, recognized

As a final test, Kim’s team explored how its device would perform if it were to carry out actual learning tasks — specifically, recognizing samples of handwriting, which researchers consider to be a first practical test for neuromorphic chips. Such chips would consist of “input/hidden/output neurons,” each connected to other “neurons” via filament-based artificial synapses.

Scientists believe such stacks of neural nets can be made to “learn.” For instance, when fed an input that is a handwritten ‘1,’ with an output that labels it as ‘1,’ certain output neurons will be activated by input neurons and weights from an artificial synapse. When more examples of handwritten ‘1s’ are fed into the same chip, the same output neurons may be activated when they sense similar features between different samples of the same letter, thus “learning” in a fashion similar to what the brain does.

Kim and his colleagues ran a computer simulation of an artificial neural network consisting of three sheets of neural layers connected via two layers of artificial synapses, the properties of which they based on measurements from their actual neuromorphic chip. They fed into their simulation tens of thousands of samples from a handwriting recognition dataset commonly used by neuromorphic designers, and found that their neural network hardware recognized handwritten samples 95 percent of the time, compared to the 97 percent accuracy of existing software algorithms.
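[Note: for readers who want a feel for what such a simulation involves, here is a rough, hypothetical analogue in Python. It is not the team’s code, and it uses the small handwritten-digit set bundled with scikit-learn rather than the dataset they used: a three-layer network with two weight (‘synapse’) layers is trained in software, then re-scored after every weight is jittered by a few percent to mimic synapse nonuniformity.]

```python
# Rough, hypothetical analogue of the described simulation -- not the team's code.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X / 16.0, y, test_size=0.25, random_state=0)

# Input layer -> one hidden layer -> output layer = two weight ("synapse") layers
net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print(f"ideal (software) accuracy: {net.score(X_test, y_test):.3f}")

# Jitter every connection weight by ~4% to mimic device-to-device variation
rng = np.random.default_rng(0)
for W in net.coefs_:
    W *= rng.normal(loc=1.0, scale=0.04, size=W.shape)
print(f"accuracy with ~4% weight variation: {net.score(X_test, y_test):.3f}")
```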

The team is in the process of fabricating a working neuromorphic chip that can carry out handwriting-recognition tasks, not in simulation but in reality. Looking beyond handwriting, Kim says the team’s artificial synapse design will enable much smaller, portable neural network devices that can perform complex computations that currently are only possible with large supercomputers.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” Kim says. “This opens a stepping stone to produce real artificial hardware.”

This research was supported in part by the National Science Foundation.

Here’s a link to and a citation for the paper,

SiGe epitaxial memory for neuromorphic computing with reproducible high performance based on engineered dislocations by Shinhyun Choi, Scott H. Tan, Zefan Li, Yunjo Kim, Chanyeol Choi, Pai-Yu Chen, Hanwool Yeon, Shimeng Yu, & Jeehwan Kim. Nature Materials (2018) doi:10.1038/s41563-017-0001-5 Published online: 22 January 2018

This paper is behind a paywall.

For the curious I have included a number of links to recent ‘memristor’ postings here,

January 22, 2018: Memristors at Masdar

January 3, 2018: Mott memristor

August 24, 2017: Neuristors and brainlike computing

June 28, 2017: Dr. Wei Lu and bio-inspired ‘memristor’ chips

May 2, 2017: Predicting how a memristor functions

December 30, 2016: Changing synaptic connectivity with a memristor

December 5, 2016: The memristor as computing device

November 1, 2016: The memristor as the ‘missing link’ in bioelectronic medicine?

You can find more by using ‘memristor’ as the search term in the blog search function or on the search engine of your choice.

Graphite ‘gold’ rush?

Someone in Germany (I think) is very excited about graphite; more specifically, there’s excitement around graphite flakes located in the province of Québec, Canada. Although, the person who wrote this news release might have wanted to run a search for ‘graphite’ and ‘gold rush’. The last graphite gold rush seems to have taken place in 2013.

Here’s the March 1, 2018 news release on PR Newswire (Cision) (Note: Some links have been removed),

PALM BEACH, Florida, March 1, 2018 /PRNewswire/ —

MarketNewsUpdates.com News Commentary

Much like the gold rush in North America in the 1800s, people are going out in droves searching for a different kind of precious metal, graphite. The thing your third grade pencils were made of is now one of the hottest commodities on the market. This graphite is not being mined by your run-of-the-mill old-timey soot covered prospectors anymore. Big mining companies are all looking for this important resource integral to the production of lithium ion batteries due to the rise in popularity of electric cars. These players include Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), Teck Resources Limited (NYSE: TECK), Nemaska Lithium (TSX: NMX), Lithium Americas Corp. (TSX: LAC), and Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF).

These companies, looking to manufacture their graphite-based products, have seen steady positive growth over the past year. Their development of cutting-edge new products seems to be paying off. But in order to continue innovating, these companies need the graphite to do it. One junior miner looking to capitalize on the growing demand for this commodity is Graphite Energy Corp.

Graphite Energy is a mining company focused on developing graphite resources. Graphite Energy’s state-of-the-art mining technology is friendly to the environment and has indicated graphite carbon (Cg) in the range of 2.20% to 22.30%, with an average of 10.50% Cg, from their Lac Aux Bouleaux Graphite Property in Southern Quebec [Canada].

Not Just Any Graphite Will Do

Graphite is one of the most in-demand technology metals required for a green and sustainable world. Demand is only set to increase as the need for lithium ion batteries grows, fueled by the popularity of electric vehicles. However, not all graphite is created equal. The price of natural graphite has more than doubled since 2013 as companies look to maintain environmental standards which the use of synthetic graphite cannot provide due to its polluting manufacturing process. Synthetic graphite is also very expensive to produce, deriving from petroleum and costing up to ten times as much as natural graphite. Therefore, manufacturers are interested in increasing the proportion of natural graphite in their products in order to lower their costs.

High-grade large flake graphite is the solution to the environmental issues these companies are facing. But there is only so much supply to go around. Recent news by Graphite Energy Corp. on February 26th [2018] showed promising exploratory results. The announcement of the commencement of drilling is a positive step forward to meeting this increased demand.

Everything from batteries to solar panels needs to be made with this natural high-grade flake graphite, because what is the point of powering your home with the sun or charging your car if the products themselves do more harm than good to the environment when produced? However, supply consistency remains an issue since mines have different raw material impurities which vary from mine to mine. Certain types of battery technology already require graphite to be almost 100% pure. It is very possible that the purity requirements will increase in the future.

Natural graphite is also the basis of graphene, the uses of which seem limited only by scientists’ imaginations, given the host of new applications announced daily. In a recent study by ResearchSEA, a team from the Ocean University of China and Yunnan Normal University developed a highly efficient dye-sensitized solar cell using a graphene layer. This thin layer of graphene will allow solar panels to generate electricity when it rains.

Graphite Energy Is Keeping It Green

Whether it’s the graphite for the solar panels that will power the homes of tomorrow, or the lithium ion batteries that will fuel the latest cars, these advancements need to be made in an environmentally conscious way. Mining companies like Graphite Energy Corp. specialize in the production of environmentally friendly graphite. The company will be producing its supply of natural graphite with the lowest environmental footprint possible.

From Saltwater To Clean Water Using Graphite

The world’s freshwater supply is at risk of running out. In order to mitigate this global disaster, worldwide spending on desalination technology was an estimated $16.6 billion in 2016. Due to the recent intense droughts in California, the state has accelerated the construction of desalination plants. However, the operating costs and the environmental impact of the process’s energy requirements have hindered any real progress in the space, until now.

Jeffrey Grossman, a professor at MIT’s [Massachusetts Institute of Technology, United States] Department of Materials Science and Engineering (DMSE), has been looking into whether graphite/graphene might reduce the cost of desalination.

“A billion people around the world lack regular access to clean water, and that’s expected to more than double in the next 25 years,” Grossman says. “Desalinated water costs five to 10 times more than regular municipal water, yet we’re not investing nearly enough money into research. If we don’t have clean energy we’re in serious trouble, but if we don’t have water we die.”

Grossman’s lab has demonstrated strong results showing that new filters made from graphene could greatly improve the energy efficiency of desalination plants while potentially reducing other costs as well.

Graphite/Graphene producers like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are moving quickly to provide the materials necessary to develop this new generation of desalination plants.

Potential Comparables

Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF)

Cruz Cobalt Corp. is a cobalt mining company involved in the identification, acquisition and exploration of mineral properties. The company’s geographical segments include the United States and Canada. They are focused on acquiring and developing high-grade cobalt projects in politically stable, environmentally responsible and ethical mining jurisdictions, essential for the rapidly growing rechargeable battery and renewable energy sectors.

Nemaska Lithium (TSE: NMX.TO)

Nemaska Lithium is a lithium mining company. The company is a supplier of lithium hydroxide and lithium carbonate to the emerging lithium battery market that is largely driven by electric vehicles. Nemaska’s mining operations are located in the mining-friendly jurisdiction of Quebec, Canada. Nemaska Lithium has received a notice of allowance of a main patent application on its proprietary process to produce lithium hydroxide and lithium carbonate.

Lithium Americas Corp. (TSX: LAC.TO)

Lithium Americas is developing one of North America’s largest lithium deposits in northern Nevada. It operates two lithium projects: the Cauchari-Olaroz project, located in Argentina, and the Lithium Nevada project, located in Nevada. The company manufactures specialty organoclay products, derived from clays, for sale to the oil and gas and other sectors.

Teck Resources Limited (NYSE: TECK)

Teck Resources Limited is a Canadian metals and mining company. Teck’s principal products include coal, copper and zinc, with secondary products including lead, silver, gold, molybdenum, germanium, indium and cadmium. Teck’s diverse resources focus on providing products that are essential to building a better quality of life for people around the globe.

Graphite Mining Today For A Better Tomorrow

Graphite mining will forever be intertwined with the latest advancements in science and technology. Graphite deserves attention for its various use cases in the automotive, energy, aerospace and robotics industries. In order for these and other industries to become sustainable and environmentally friendly, a reliance on graphite is necessary. Therefore, this rapidly growing sector has the potential to fuel investor interest in the mining space throughout 2018. The near limitless uses of graphite have the potential to impact every facet of our lives. Companies like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are at the forefront of this technological revolution.

For more information on Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), please visit streetsignals.com for a free research report.

Streetsignals.com (SS) is the source of the Article and content set forth above. References to any issuer other than the profiled issuer are intended solely to identify industry participants and do not constitute an endorsement of any issuer and do not constitute a comparison to the profiled issuer. FN Media Group (FNM) is a third-party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. FNM is NOT affiliated with SS or any company mentioned herein. The commentary, views and opinions expressed in this release by SS are solely those of SS and are not shared by and do not reflect in any manner the views or opinions of FNM. Readers of this Article and content agree that they cannot and will not seek to hold liable SS and FNM for any investment decisions by their readers or subscribers. SS and FNM and their respective affiliated companies are a news dissemination and financial marketing solutions provider and are NOT registered broker-dealers/analysts/investment advisers, hold no investment licenses and may NOT sell, offer to sell or offer to buy any security.

The Article and content related to the profiled company represent the personal and subjective views of the Author (SS), and are subject to change at any time without notice. The information provided in the Article and the content has been obtained from sources which the Author believes to be reliable. However, the Author (SS) has not independently verified or otherwise investigated all such information. None of the Author, SS, FNM, or any of their respective affiliates, guarantee the accuracy or completeness of any such information. This Article and content are not, and should not be regarded as investment advice or as a recommendation regarding any particular security or course of action; readers are strongly urged to speak with their own investment advisor and review all of the profiled issuer’s filings made with the Securities and Exchange Commission before making any investment decisions and should understand the risks associated with an investment in the profiled issuer’s securities, including, but not limited to, the complete loss of your investment. FNM was not compensated by any public company mentioned herein to disseminate this press release but was compensated seventy six hundred dollars by SS, a non-affiliated third party to distribute this release on behalf of Graphite Energy Corp.

FNM HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE.

This release contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E the Securities Exchange Act of 1934, as amended and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. “Forward-looking statements” describe future expectations, plans, results, or strategies and are generally preceded by words such as “may”, “future”, “plan” or “planned”, “will” or “should”, “expected,” “anticipates”, “draft”, “eventually” or “projected”. You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company’s annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and SS and FNM undertake no obligation to update such statements.

Media Contact:

FN Media Group, LLC
info@marketnewsupdates.com
+1(561)325-8757

SOURCE MarketNewsUpdates.com

Hopefully my insertions of ‘Canada’ and the ‘United States’ help to clarify matters. North America and the United States are not synonyms although they are sometimes used synonymously.

There is another copy of this news release on Wall Street Online (Deutschland), both in English and German. By the way, that was my first clue that there might be some German interest. The second clue was the Graphite Energy Corp. homepage. Unusually for a company with ‘headquarters’ in the Canadian province of British Columbia, there’s an option to read the text in German.

Graphite Energy Corp. seems to be a relatively new player in the ‘rush’ to mine graphite flakes for use in graphene-based applications. One of my first posts about mining for graphite flakes was a July 26, 2011 posting concerning Northern Graphite and their mining operation (Bissett Creek) in Ontario. I don’t write about them often but they are still active if their news releases are to be believed. The latest was issued February 28, 2018 and offers “financial metrics for the Preliminary Economic Assessment (the “PEA”) on the Company’s 100% owned Bissett Creek graphite project.”

The other graphite mining company mentioned here is Lomiko Metals. The latest posting here about Lomiko is a December 23, 2015 piece regarding an analysis and stock price recommendation by a company known as SeeThruEquity. Like Graphite Energy Corp., Lomiko’s mines are located in Québec and their business headquarters in British Columbia. Lomiko has a March 16, 2018 news release announcing its reinstatement for trading on the TSX (Toronto Stock Exchange),

(Vancouver, B.C.) Lomiko Metals Inc. (“Lomiko”) (TSX-V: LMR, OTC: LMRMF, FSE: DH8C) announces it has been successful in its reinstatement application with the TSX Venture Exchange and trading will begin at the opening on Tuesday, March 20, 2018.

Getting back to the flakes, here’s more about Graphite Energy Corp.’s mine (from the About Lac Aux Bouleaux webpage),

Lac Aux Bouleaux

The Lac Aux Bouleaux Property is comprised of 14 mineral claims in one contiguous block totaling 738.12 hectares of land on NTS 31J05, near the town of Mont-Laurier in southern Québec. Lac Aux Bouleaux “LAB” is a world class graphite property that borders the only producing graphite mine in North America [Note: There are three countries in North America, Canada, the United States, and Mexico. Québec is in Canada.]. On the property we have a full production facility already built, which includes an open pit mine, processing facility, tailings pond, power and easy access to roads.

High Purity Levels

An important asset of LAB is its metallurgy. The property contains a high proportion of large and jumbo flakes from which a high purity concentrate was proven to be produced across all flakes by a simple flotation process. The concentrate can then be further purified using the province’s green and affordable hydro-electricity to be used in lithium-ion batteries.

The geological work performed in order to verify the existing data consisted of visiting approachable graphite outcrops and reviewing historical exploration and development work on the property. Large flake graphite showings located on the property were confirmed, with flake sizes in the range of 0.5 to 2 millimeters, typically present in shear zones at the contact of gneisses and marbles where the graphite content usually ranges from 2% to 20%. The results from the property are outstanding, showing jumbo flake natural graphite.

An onsite mill structure, a tailings dam facility, and a historical open mining pit are already present on the property. The property is ready to be put into production based on the existing infrastructure already built. The company hopes to be able to ship its mined graphite by rail directly to Tesla’s Gigafactory being built in Nevada [United States], which will produce 35GWh of batteries annually by 2020.

Adjacent Properties

The property is located in a very active graphite exploration and production area, adjacent to the south of TIMCAL’s Lac des Iles graphite mine in Quebec which is a world class deposit producing 25,000 tonnes of graphite annually. There are several graphite showings and past producing mines in its vicinity, including a historic deposit located on the property.

The open pit mine, in operation since 1989 with an onsite plant, ranks 5th in world graphite production. The mine is operated by TIMCAL Graphite & Carbon, a subsidiary of Imerys S.A., a French multinational company. The mine has an average grade of 7.5% Cg (graphite carbon) and has been producing 50 different graphite products for various graphite end users around the globe.

Canadians! We have great flakes!

Tracking artificial intelligence

Researchers at Stanford University have developed an index for measuring (tracking) the progress made by artificial intelligence (AI) according to a January 9, 2018 news item on phys.org (Note: Links have been removed),

Since the term “artificial intelligence” (AI) was first used in print in 1956, the one-time science fiction fantasy has progressed to the very real prospect of driverless cars, smartphones that recognize complex spoken commands and computers that see.

In an effort to track the progress of this emerging field, a Stanford-led group of leading AI thinkers called the AI100 has launched an index that will provide a comprehensive baseline on the state of artificial intelligence and measure technological progress in the same way the gross domestic product and the S&P 500 index track the U.S. economy and the broader stock market.

For anyone curious about the AI100 initiative, I have a description of it in my Sept. 27, 2016 post highlighting the group’s first report or you can keep on reading.

Getting back to the matter at hand, a December 21, 2017 Stanford University press release by Andrew Myers, which originated the news item, provides more detail about the AI index,

“The AI100 effort realized that in order to supplement its regular review of AI, a more continuous set of collected metrics would be incredibly useful,” said Russ Altman, a professor of bioengineering and the faculty director of AI100. “We were very happy to seed the AI Index, which will inform the AI100 as we move forward.”

The AI100 was set in motion three years ago when Eric Horvitz, a Stanford alumnus and former president of the Association for the Advancement of Artificial Intelligence, worked with his wife, Mary Horvitz, to define and endow the long-term study. Its first report, released in the fall of 2016, sought to anticipate the likely effects of AI in an urban environment in the year 2030.

Among the key findings in the new index are a dramatic increase in AI startups and investment as well as significant improvements in the technology’s ability to mimic human performance.

Baseline metrics

The AI Index tracks and measures at least 18 independent vectors in academia, industry, open-source software and public interest, plus technical assessments of progress toward what the authors call “human-level performance” in areas such as speech recognition, question-answering and computer vision – algorithms that can identify objects and activities in 2D images. Specific metrics in the index include evaluations of academic papers published, course enrollment, AI-related startups, job openings, search-term frequency and media mentions, among others.

“In many ways, we are flying blind in our discussions about artificial intelligence and lack the data we need to credibly evaluate activity,” said Yoav Shoham, professor emeritus of computer science.

“The goal of the AI Index is to provide a fact-based measuring stick against which we can chart progress and fuel a deeper conversation about the future of the field,” Shoham said.

Shoham conceived of the index and assembled a steering committee including Ray Perrault from SRI International, Erik Brynjolfsson of the Massachusetts Institute of Technology and Jack Clark from OpenAI. The committee subsequently hired Calvin LeGassick as project manager.

“The AI Index will succeed only if it becomes a community effort,” Shoham said.

Although the authors say the AI Index is the first index to track either scientific or technological progress, there are many other non-financial indexes that provide valuable insight into equally hard-to-quantify fields. Examples include the Social Progress Index, the Middle East peace index and the Bangladesh empowerment index, which measure factors as wide-ranging as nutrition, sanitation, workload, leisure time, public sentiment and even public speaking opportunities.

Intriguing findings

Among the findings of this inaugural index is that the number of active AI startups has increased 14-fold since 2000. Venture capital investment has increased six times in the same period. In academia, publishing in AI has increased a similarly impressive nine times in the last 20 years while course enrollment has soared. Enrollment in the introductory AI-related machine learning course at Stanford, for instance, has grown 45-fold in the last 30 years.

In technical metrics, image and speech recognition are both approaching, if not surpassing, human-level performance. The authors noted that AI systems have excelled in such real-world applications as object detection, the ability to understand and answer questions, and classification of photographic images of skin cancer cells.

Shoham noted that the report is still very U.S.-centric and will need a greater international presence as well as a greater diversity of voices. He said he also sees opportunities to fold in government and corporate investment in addition to the venture capital funds that are currently included.

In terms of human-level performance, the AI Index suggests that in some ways AI has already arrived. This is true in game-playing applications including chess, the Jeopardy! game show and, most recently, the game of Go. Nonetheless, the authors note that computers continue to lag considerably in the ability to generalize specific information into deeper meaning.

“AI has made truly amazing strides in the past decade,” Shoham said, “but computers still can’t exhibit the common sense or the general intelligence of even a 5-year-old.”

The AI Index was made possible by funding from AI100, Google, Microsoft and Toutiao. Data supporting the various metrics were provided by Elsevier, TrendKite, Indeed.com, Monster.com, the Google Trends Team, the Google Brain Team, Sand Hill Econometrics, VentureSource, Crunchbase, Electronic Frontier Foundation, EuroMatrix, Geoff Sutcliffe, Kevin Leyton-Brown and Holger Hoos.

You can find the AI Index here. They’re featuring their 2017 report, but you can also find data (on the menu bar on the upper right side of your screen), along with a few provisos. I was curious as to whether any AI had been used to analyze the data and/or write the report. A very cursory look at the 2017 report did not answer that question. I’m fascinated by the failure to address what I think is an obvious question. It suggests that even very, very bright people can have blind spots, and I suspect that’s why the group seems quite eager to get others involved. From the 2017 AI Index Report,

As the report’s limitations illustrate, the AI Index will always paint a partial picture. For this reason, we include subjective commentary from a cross-section of AI experts. This Expert Forum helps animate the story behind the data in the report and adds interpretation the report lacks.

Finally, where the experts’ dialogue ends, your opportunity to Get Involved begins [emphasis mine]. We will need the feedback and participation of a larger community to address the issues identified in this report, uncover issues we have omitted, and build a productive process for tracking activity and progress in Artificial Intelligence. (p. 8)

Unfortunately, it’s not clear how one becomes involved. Is there a forum or do you get in touch with one of the team leaders?

I wish them good luck with their project and imagine that these minor hiccups will be dealt with in the near term.

The devil’s (i.e., luciferase) in the bioluminescent plant

The American Chemical Society (ACS) and the Massachusetts Institute of Technology (MIT) have both issued news releases about the latest in bioluminescence. The researchers tested their work on watercress, a vegetable that was viewed in almost sacred terms in my family; it was not easily available in Vancouver (Canada) when I was a child.

My father would hunt down fresh watercress by checking out the Chinese grocery stores. He could spot the fresh stuff from across the street while driving at 30 miles or more per hour. Spotting it entailed an immediate hunt for parking (my father hated to pay so we might have to go around the block a few times or more) and a dash out of the car to ensure that he got his watercress before anyone else spotted it. These days it’s much more easily available and, thankfully, my father has passed on so he won’t have to think about glowing watercress.

Getting back to bioluminescent vegetable research, the American Chemical Society’s Dec. 13, 2017 news release on EurekAlert (and as a Dec. 13, 2017 news item on ScienceDaily) makes the announcement,

The 2009 film “Avatar” created a lush imaginary world, illuminated by magical, glowing plants. Now researchers are starting to bring this spellbinding vision to life to help reduce our dependence on artificial lighting. They report in ACS’ journal Nano Letters a way to infuse plants with the luminescence of fireflies.

Nature has produced many bioluminescent organisms, however, plants are not among them. Most attempts so far to create glowing greenery — decorative tobacco plants in particular — have relied on introducing the genes of luminescent bacteria or fireflies through genetic engineering. But getting all the right components to the right locations within the plants has been a challenge. To gain better control over where light-generating ingredients end up, Michael S. Strano and colleagues recently created nanoparticles that travel to specific destinations within plants. Building on this work, the researchers wanted to take the next step and develop a “nanobionic,” glowing plant.

The team infused watercress and other plants with three different nanoparticles in a pressurized bath. The nanoparticles were loaded with light-emitting luciferin; luciferase, which modifies luciferin and makes it glow; and coenzyme A, which boosts luciferase activity. Using size and surface charge to control where the sets of nanoparticles could go within the plant tissues, the researchers could optimize how much light was emitted. Their watercress was half as bright as a commercial 1 microwatt LED and 100,000 times brighter than genetically engineered tobacco plants. Also, the plant could be turned off by adding a compound that blocks luciferase from activating luciferin’s glow.

Here’s a video from MIT detailing their research,

A December 13, 2017 MIT news release (also on EurekAlert) casts more light on the topic (I couldn’t resist the word play),

Imagine that instead of switching on a lamp when it gets dark, you could read by the light of a glowing plant on your desk.

MIT engineers have taken a critical first step toward making that vision a reality. By embedding specialized nanoparticles into the leaves of a watercress plant, they induced the plants to give off dim light for nearly four hours. They believe that, with further optimization, such plants will one day be bright enough to illuminate a workspace.

“The vision is to make a plant that will function as a desk lamp — a lamp that you don’t have to plug in. The light is ultimately powered by the energy metabolism of the plant itself,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the study.

This technology could also be used to provide low-intensity indoor lighting, or to transform trees into self-powered streetlights, the researchers say.

MIT postdoc Seon-Yeong Kwak is the lead author of the study, which appears in the journal Nano Letters.

Nanobionic plants

Plant nanobionics, a new research area pioneered by Strano’s lab, aims to give plants novel features by embedding them with different types of nanoparticles. The group’s goal is to engineer plants to take over many of the functions now performed by electrical devices. The researchers have previously designed plants that can detect explosives and communicate that information to a smartphone, as well as plants that can monitor drought conditions.

Lighting, which accounts for about 20 percent of worldwide energy consumption, seemed like a logical next target. “Plants can self-repair, they have their own energy, and they are already adapted to the outdoor environment,” Strano says. “We think this is an idea whose time has come. It’s a perfect problem for plant nanobionics.”

To create their glowing plants, the MIT team turned to luciferase, the enzyme that gives fireflies their glow. Luciferase acts on a molecule called luciferin, causing it to emit light. Another molecule called co-enzyme A helps the process along by removing a reaction byproduct that can inhibit luciferase activity.

The MIT team packaged each of these three components into a different type of nanoparticle carrier. The nanoparticles, which are all made of materials that the U.S. Food and Drug Administration classifies as “generally regarded as safe,” help each component get to the right part of the plant. They also prevent the components from reaching concentrations that could be toxic to the plants.

The researchers used silica nanoparticles about 10 nanometers in diameter to carry luciferase, and they used slightly larger particles of the polymers PLGA and chitosan to carry luciferin and coenzyme A, respectively. To get the particles into plant leaves, the researchers first suspended the particles in a solution. Plants were immersed in the solution and then exposed to high pressure, allowing the particles to enter the leaves through tiny pores called stomata.

Particles releasing luciferin and coenzyme A were designed to accumulate in the extracellular space of the mesophyll, an inner layer of the leaf, while the smaller particles carrying luciferase enter the cells that make up the mesophyll. The PLGA particles gradually release luciferin, which then enters the plant cells, where luciferase performs the chemical reaction that makes luciferin glow.

The researchers’ early efforts at the start of the project yielded plants that could glow for about 45 minutes, which they have since improved to 3.5 hours. The light generated by one 10-centimeter watercress seedling is currently about one-thousandth of the amount needed to read by, but the researchers believe they can boost the light emitted, as well as the duration of light, by further optimizing the concentration and release rates of the components.

Plant transformation

Previous efforts to create light-emitting plants have relied on genetically engineering plants to express the gene for luciferase, but this is a laborious process that yields extremely dim light. Those studies were performed on tobacco plants and Arabidopsis thaliana, which are commonly used for plant genetic studies. However, the method developed by Strano’s lab could be used on any type of plant. So far, they have demonstrated it with arugula, kale, and spinach, in addition to watercress.

For future versions of this technology, the researchers hope to develop a way to paint or spray the nanoparticles onto plant leaves, which could make it possible to transform trees and other large plants into light sources.

“Our target is to perform one treatment when the plant is a seedling or a mature plant, and have it last for the lifetime of the plant,” Strano says. “Our work very seriously opens up the doorway to streetlamps that are nothing but treated trees, and to indirect lighting around homes.”

The researchers have also demonstrated that they can turn the light off by adding nanoparticles carrying a luciferase inhibitor. This could enable them to eventually create plants that shut off their light emission in response to environmental conditions such as sunlight, the researchers say.

Here’s a link to and a citation for the paper,

A Nanobionic Light-Emitting Plant by Seon-Yeong Kwak, Juan Pablo Giraldo, Min Hao Wong, Volodymyr B. Koman, Tedrick Thomas Salim Lew, Jon Ell, Mark C. Weidman, Rosalie M. Sinclair, Markita P. Landry, William A. Tisdale, and Michael S. Strano. Nano Lett., 2017, 17 (12), pp 7951–7961 DOI: 10.1021/acs.nanolett.7b04369 Publication Date (Web): November 17, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

A 3D printed ‘living’ tattoo

MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells. Courtesy of the researchers [and MIT]

If that image isn’t enough, there’s also a video abstract (I don’t think I’ve seen one of these before) for the paper,

For those who’d still like to read the text, here’s more from a December 5, 2017 MIT (Massachusetts Institute of Technology) news release (also on EurekAlert),

MIT engineers have devised a 3-D printing technique that uses a new kind of ink made from genetically programmed living cells.

The cells are engineered to light up in response to a variety of stimuli. When mixed with a slurry of hydrogel and nutrients, the cells can be printed, layer by layer, to form three-dimensional, interactive structures and devices.

The team then demonstrated its technique by printing a “living tattoo” — a thin, transparent patch patterned with live bacteria cells in the shape of a tree. Each branch of the tree is lined with cells sensitive to a different chemical or molecular compound. When the patch is adhered to skin that has been exposed to the same compounds, corresponding regions of the tree light up in response.

The researchers, led by Xuanhe Zhao, the Noyce Career Development Professor in MIT’s Department of Mechanical Engineering, and Timothy Lu, associate professor of biological engineering and of electrical engineering and computer science, say that their technique can be used to fabricate “active” materials for wearable sensors and interactive displays. Such materials can be patterned with live cells engineered to sense environmental chemicals and pollutants as well as changes in pH and temperature.

What’s more, the team developed a model to predict the interactions between cells within a given 3-D-printed structure, under a variety of conditions. The team says researchers can use the model as a guide in designing responsive living materials.

Zhao, Lu, and their colleagues have published their results today [December 5, 2017] in the journal Advanced Materials. The paper’s co-authors are graduate students Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, and postdoc Cesar de la Fuente-Nunez.

A hardy alternative

In recent years, scientists have explored a variety of responsive materials as the basis for 3D-printed inks. For instance, scientists have used inks made from temperature-sensitive polymers to print heat-responsive shape-shifting objects. Others have printed photoactivated structures from polymers that shrink and stretch in response to light.

Zhao’s team, working with bioengineers in Lu’s lab, realized that live cells might also serve as responsive materials for 3D-printed inks, particularly as they can be genetically engineered to respond to a variety of stimuli. The researchers are not the first to consider 3-D printing genetically engineered cells; others have attempted to do so using live mammalian cells, but with little success.

“It turns out those cells were dying during the printing process, because mammalian cells are basically lipid bilayer balloons,” Yuk says. “They are too weak, and they easily rupture.”

Instead, the team identified a hardier cell type in bacteria. Bacterial cells have tough cell walls that are able to survive relatively harsh conditions, such as the forces applied to ink as it is pushed through a printer’s nozzle. Furthermore, bacteria, unlike mammalian cells, are compatible with most hydrogels — gel-like materials that are made from a mix of mostly water and a bit of polymer. The group found that hydrogels can provide an aqueous environment that can support living bacteria.

The researchers carried out a screening test to identify the type of hydrogel that would best host bacterial cells. After an extensive search, a hydrogel with pluronic acid was found to be the most compatible material. The hydrogel also exhibited an ideal consistency for 3-D printing.

“This hydrogel has ideal flow characteristics for printing through a nozzle,” Zhao says. “It’s like squeezing out toothpaste. You need [the ink] to flow out of a nozzle like toothpaste, and it can maintain its shape after it’s printed.”

From tattoos to living computers

Lu provided the team with bacterial cells engineered to light up in response to a variety of chemical stimuli. The researchers then came up with a recipe for their 3-D ink, using a combination of bacteria, hydrogel, and nutrients to sustain the cells and maintain their functionality.

“We found this new ink formula works very well and can print at a high resolution of about 30 micrometers per feature,” Zhao says. “That means each line we print contains only a few cells. We can also print relatively large-scale structures, measuring several centimeters.”

They printed the ink using a custom 3-D printer that they built using standard elements combined with fixtures they machined themselves. To demonstrate the technique, the team printed a pattern of hydrogel with cells in the shape of a tree on an elastomer layer. After printing, they solidified, or cured, the patch by exposing it to ultraviolet radiation. They then adhered the transparent elastomer layer, with the living patterns on it, to skin.

To test the patch, the researchers smeared several chemical compounds onto the back of a test subject’s hand, then pressed the hydrogel patch over the exposed skin. Over several hours, branches of the patch’s tree lit up when bacteria sensed their corresponding chemical stimuli.

The researchers also engineered bacteria to communicate with each other; for instance, they programmed some cells to light up only when they receive a certain signal from another cell. To test this type of communication in a 3-D structure, they printed a thin sheet of hydrogel filaments with “input,” or signal-producing, bacteria and chemicals, overlaid with another layer of filaments of “output,” or signal-receiving, bacteria. They found the output filaments lit up only when they overlapped and received input signals from the corresponding bacteria.

Yuk says in the future, researchers may use the team’s technique to print “living computers” — structures with multiple types of cells that communicate with each other, passing signals back and forth, much like transistors on a microchip.

“This is very future work, but we expect to be able to print living computational platforms that could be wearable,” Yuk says.

For more near-term applications, the researchers are aiming to fabricate customized sensors, in the form of flexible patches and stickers that could be engineered to detect a variety of chemical and molecular compounds. They also envision their technique may be used to manufacture drug capsules and surgical implants, containing cells engineered to produce compounds such as glucose, to be released therapeutically over time.

“We can use bacterial cells like workers in a 3-D factory,” Liu says. “They can be engineered to produce drugs within a 3-D scaffold, and applications should not be confined to epidermal devices. As long as the fabrication method and approach are viable, applications such as implants and ingestibles should be possible.”

Here’s a link to and a citation for the paper,

3D Printing of Living Responsive Materials and Devices by Xinyue Liu, Hyunwoo Yuk, Shaoting Lin, German Alberto Parada, Tzu-Chieh Tang, Eléonore Tham, Cesar de la Fuente-Nunez, Timothy K. Lu, and Xuanhe Zhao. Advanced Materials DOI: 10.1002/adma.201704821 Version of Record online: 5 DEC 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

A transatlantic report highlighting the risks and opportunities associated with synthetic biology and bioengineering

I love eLife, the open access journal whose editors noted that a submitted synthetic biology and bioengineering report was replete with US and UK experts (along with a European or two) but had no expert input from other parts of the world. In response, the authors added ‘transatlantic’ to the title. It was a good decision, since it was too late to add new experts if the authors planned to have their paper published in the foreseeable future.

I’ve commented many times here, when panels of experts include only Canadian, US, UK and, sometimes, European or Commonwealth (Australia/New Zealand) experts, that we need to broaden our perspectives. Now I can add: or at least acknowledge (e.g. ‘transatlantic’) that the perspectives taken reflect a rather narrow range of countries.

Now getting to the report, here’s more from a November 21, 2017 University of Cambridge press release,

Human genome editing, 3D-printed replacement organs and artificial photosynthesis – the field of bioengineering offers great promise for tackling the major challenges that face our society. But as a new article out today highlights, these developments provide both opportunities and risks in the short and long term.

Rapid developments in the field of synthetic biology and its associated tools and methods, including more widely available gene editing techniques, have substantially increased our capabilities for bioengineering – the application of principles and techniques from engineering to biological systems, often with the goal of addressing ‘real-world’ problems.

In a feature article published in the open access journal eLife, an international team of experts led by Dr Bonnie Wintle and Dr Christian R. Boehm from the Centre for the Study of Existential Risk at the University of Cambridge, capture perspectives of industry, innovators, scholars, and the security community in the UK and US on what they view as the major emerging issues in the field.

Dr Wintle says: “The growth of the bio-based economy offers the promise of addressing global environmental and societal challenges, but as our paper shows, it can also present new kinds of challenges and risks. The sector needs to proceed with caution to ensure we can reap the benefits safely and securely.”

The report is intended as a summary and launching point for policy makers across a range of sectors to further explore those issues that may be relevant to them.

Among the issues highlighted by the report as being most relevant over the next five years are:

Artificial photosynthesis and carbon capture for producing biofuels

If technical hurdles can be overcome, such developments might contribute to the future adoption of carbon capture systems, and provide sustainable sources of commodity chemicals and fuel.

Enhanced photosynthesis for agricultural productivity

Synthetic biology may hold the key to increasing yields on currently farmed land – and hence helping address food security – by enhancing photosynthesis and reducing pre-harvest losses, as well as reducing post-harvest and post-consumer waste.

Synthetic gene drives

Gene drives promote the inheritance of preferred genetic traits throughout a species, for example to prevent malaria-transmitting mosquitoes from breeding. However, this technology raises questions about whether it may alter ecosystems [emphasis mine], potentially even creating niches where a new disease-carrying species or new disease organism may take hold.

Human genome editing

Genome engineering technologies such as CRISPR/Cas9 offer the possibility to improve human lifespans and health. However, their implementation poses major ethical dilemmas. It is feasible that individuals or states with the financial and technological means may elect to provide strategic advantages to future generations.

Defence agency research in biological engineering

The areas of synthetic biology in which some defence agencies invest raise the risk of ‘dual-use’. For example, one programme intends to use insects to disseminate engineered plant viruses that confer traits to the target plants they feed on, with the aim of protecting crops from potential plant pathogens – but such technologies could plausibly also be used by others to harm targets.

In the next five to ten years, the authors identified areas of interest including:

Regenerative medicine: 3D printing body parts and tissue engineering

While this technology will undoubtedly ease suffering caused by traumatic injuries and a myriad of illnesses, reversing the decay associated with age is still fraught with ethical, social and economic concerns. Healthcare systems would rapidly become overburdened by the cost of replenishing body parts of citizens as they age, and this could lead to new socioeconomic classes, as only those who can pay for such care themselves can extend their healthy years.

Microbiome-based therapies

The human microbiome is implicated in a large number of human disorders, from Parkinson’s to colon cancer, as well as metabolic conditions such as obesity and type 2 diabetes. Synthetic biology approaches could greatly accelerate the development of more effective microbiota-based therapeutics. However, there is a risk that DNA from genetically engineered microbes may spread to other microbiota in the human microbiome or into the wider environment.

Intersection of information security and bio-automation

Advancements in automation technology combined with faster and more reliable engineering techniques have resulted in the emergence of robotic ‘cloud labs’ where digital information is transformed into DNA then expressed in some target organisms. This opens the possibility of new kinds of information security threats, which could include tampering with digital DNA sequences leading to the production of harmful organisms, and sabotaging vaccine and drug production through attacks on critical DNA sequence databases or equipment.

Over the longer term, issues identified include:

New makers disrupt pharmaceutical markets

Community bio-labs and entrepreneurial startups are customizing and sharing methods and tools for biological experiments and engineering. Combined with open business models and open source technologies, this could herald opportunities for manufacturing therapies tailored to regional diseases that multinational pharmaceutical companies might not find profitable. But this raises concerns around the potential disruption of existing manufacturing markets and raw material supply chains as well as fears about inadequate regulation, less rigorous product quality control and misuse.

Platform technologies to address emerging disease pandemics

Emerging infectious diseases—such as recent Ebola and Zika virus disease outbreaks—and potential biological weapons attacks require scalable, flexible diagnosis and treatment. New technologies could enable the rapid identification and development of vaccine candidates, and plant-based antibody production systems.

Shifting ownership models in biotechnology

The rise of off-patent, generic tools and the lowering of technical barriers for engineering biology have the potential to help those in low-resource settings benefit from developing a sustainable bioeconomy based on local needs and priorities, particularly where new advances are made open for others to build on.

Dr Jenny Molloy comments: “One theme that emerged repeatedly was that of inequality of access to the technology and its benefits. The rise of open source, off-patent tools could enable widespread sharing of knowledge within the biological engineering field and increase access to benefits for those in developing countries.”

Professor Johnathan Napier from Rothamsted Research adds: “The challenges embodied in the Sustainable Development Goals will require all manner of ideas and innovations to deliver significant outcomes. In agriculture, we are on the cusp of new paradigms for how and what we grow, and where. Demonstrating the fairness and usefulness of such approaches is crucial to ensure public acceptance and also to delivering impact in a meaningful way.”

Dr Christian R. Boehm concludes: “As these technologies emerge and develop, we must ensure public trust and acceptance. People may be willing to accept some of the benefits, such as the shift in ownership away from big business and towards more open science, and the ability to address problems that disproportionately affect the developing world, such as food security and disease. But proceeding without the appropriate safety precautions and societal consensus—whatever the public health benefits—could damage the field for many years to come.”

The research was made possible by the Centre for the Study of Existential Risk, the Synthetic Biology Strategic Research Initiative (both at the University of Cambridge), and the Future of Humanity Institute (University of Oxford). It was based on a workshop co-funded by the Templeton World Charity Foundation and the European Research Council under the European Union’s Horizon 2020 research and innovation programme.

Here’s a link to and a citation for the paper,

A transatlantic perspective on 20 emerging issues in biological engineering by Bonnie C Wintle, Christian R Boehm, Catherine Rhodes, Jennifer C Molloy, Piers Millett, Laura Adam, Rainer Breitling, Rob Carlson, Rocco Casagrande, Malcolm Dando, Robert Doubleday, Eric Drexler, Brett Edwards, Tom Ellis, Nicholas G Evans, Richard Hammond, Jim Haseloff, Linda Kahl, Todd Kuiken, Benjamin R Lichman, Colette A Matthewman, Johnathan A Napier, Seán S ÓhÉigeartaigh, Nicola J Patron, Edward Perello, Philip Shapira, Joyce Tait, Eriko Takano, William J Sutherland. eLife; 14 Nov 2017; DOI: 10.7554/eLife.30247

This paper is open access and the editors have included their notes to the authors and the authors’ response.

You may have noticed that I highlighted a portion of the text concerning synthetic gene drives. Coincidentally I ran across a November 16, 2017 article by Ed Yong for The Atlantic where the topic is discussed within the context of a project in New Zealand, ‘Predator Free 2050’ (Note: A link has been removed),

Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.

Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.

But perhaps, they also represent the New Zealand that could be.

In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.

In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.

Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat’s color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.

Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.
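To make that inheritance arithmetic concrete, here's a toy simulation I put together; it's purely illustrative (not code from any of the studies mentioned), and the 'convert the other copy' rule is an idealized stand-in for what a CRISPR-based drive does in a real population,

import random

def simulate(generations=10, pop_size=1000, drive=False):
    # Track the frequency of a 'brown' allele in a toy rat population.
    # Each individual carries two alleles ('B' = brown, 'w' = white);
    # the founders are half BB and half ww, as in Yong's example.
    population = [('B', 'B')] * (pop_size // 2) + [('w', 'w')] * (pop_size // 2)
    for gen in range(generations):
        offspring = []
        for _ in range(pop_size):
            mom, dad = random.choice(population), random.choice(population)
            child = [random.choice(mom), random.choice(dad)]
            # Idealized gene drive: a single 'B' copy converts its partner
            # allele, so every carrier becomes homozygous 'BB'.
            if drive and 'B' in child:
                child = ['B', 'B']
            offspring.append(tuple(child))
        population = offspring
        freq = sum(a == 'B' for ind in population for a in ind) / (2 * pop_size)
        print(f"generation {gen + 1}: brown allele frequency = {freq:.2f}")

simulate(drive=False)  # hovers near 0.50, the usual coin-toss inheritance
simulate(drive=True)   # sweeps toward 1.00 within a few generations

Run with the drive switched on, the 'brown' allele reaches essentially every animal within a handful of generations, which is why the containment worries raised in the excerpt above are taken so seriously.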

These excerpts don’t do justice to this thought-provoking article. If you have time, I recommend reading it in its entirety, as it provides some insight into gene drives and, with some imagination on the reader’s part, the potential of the other technologies discussed in the report.

One last comment: I notice that Eric Drexler is cited as one of the report’s authors. He’s familiar to me as K. Eric Drexler, the author of the book that popularized nanotechnology in the US and other countries, Engines of Creation (1986).

Editing the genome with CRISPR (clustered regularly interspaced short palindromic repeats)-carrying nanoparticles

MIT (Massachusetts Institute of Technology) researchers have developed a new nonviral means of delivering CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 gene therapy, according to a November 13, 2017 news item on Nanowerk,

In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes in mice. The team used nanoparticles to carry the CRISPR components, eliminating the need to use viruses for delivery.

Using the new delivery technique, the researchers were able to cut out certain genes in about 80 percent of liver cells, the best success rate ever achieved with CRISPR in adult animals.

In a new study, MIT researchers have developed nanoparticles that can deliver the CRISPR genome-editing system and specifically modify genes, eliminating the need to use viruses for delivery. Image: MIT News

A November 13, 2017 MIT news release (also on EurekAlert), which originated the news item, provides more details about the research and a good description of and comparison between using a viral system and using a nanoparticle-based system to deliver CRISPR-CAS9,

“What’s really exciting here is that we’ve shown you can make a nanoparticle that can be used to permanently and specifically edit the DNA in the liver of an adult animal,” says Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES).

One of the genes targeted in this study, known as Pcsk9, regulates cholesterol levels. Mutations in the human version of the gene are associated with a rare disorder called dominant familial hypercholesterolemia, and the FDA recently approved two antibody drugs that inhibit Pcsk9. However these antibodies need to be taken regularly, and for the rest of the patient’s life, to provide therapy. The new nanoparticles permanently edit the gene following a single treatment, and the technique also offers promise for treating other liver disorders, according to the MIT team.

Anderson is the senior author of the study, which appears in the Nov. 13 [2017] issue of Nature Biotechnology. The paper’s lead author is Koch Institute research scientist Hao Yin. Other authors include David H. Koch Institute Professor Robert Langer of MIT, professors Victor Koteliansky and Timofei Zatsepin of the Skolkovo Institute of Science and Technology [Russia], and Professor Wen Xue of the University of Massachusetts Medical School.

Targeting disease

Many scientists are trying to develop safe and efficient ways to deliver the components needed for CRISPR, which consists of a DNA-cutting enzyme called Cas9 and a short RNA that guides the enzyme to a specific area of the genome, directing Cas9 where to make its cut.

In most cases, researchers rely on viruses to carry the gene for Cas9, as well as the RNA guide strand. In 2014, Anderson, Yin, and their colleagues developed a nonviral delivery system in the first-ever demonstration of curing a disease (the liver disorder tyrosinemia) with CRISPR in an adult animal. However, this type of delivery requires a high-pressure injection, a method that can also cause some damage to the liver.

Later, the researchers showed they could deliver the components without the high-pressure injection by packaging messenger RNA (mRNA) encoding Cas9 into a nanoparticle instead of a virus. Using this approach, in which the guide RNA was still delivered by a virus, the researchers were able to edit the target gene in about 6 percent of hepatocytes, which is enough to treat tyrosinemia.

While that delivery technique holds promise, in some situations it would be better to have a completely nonviral delivery system, Anderson says. One consideration is that once a particular virus is used, the patient will develop antibodies to it, so it couldn’t be used again. Also, some patients have pre-existing antibodies to the viruses being tested as CRISPR delivery vehicles.

In the new Nature Biotechnology paper, the researchers came up with a system that delivers both Cas9 and the RNA guide using nanoparticles, with no need for viruses. To deliver the guide RNAs, they first had to chemically modify the RNA to protect it from enzymes in the body that would normally break it down before it could reach its destination.

The researchers analyzed the structure of the complex formed by Cas9 and the RNA guide, or sgRNA, to figure out which sections of the guide RNA strand could be chemically modified without interfering with the binding of the two molecules. Based on this analysis, they created and tested many possible combinations of modifications.

“We used the structure of the Cas9 and sgRNA complex as a guide and did tests to figure out we can modify as much as 70 percent of the guide RNA,” Yin says. “We could heavily modify it and not affect the binding of sgRNA and Cas9, and this enhanced modification really enhances activity.”

Reprogramming the liver

The researchers packaged these modified RNA guides (which they call enhanced sgRNA) into lipid nanoparticles, which they had previously used to deliver other types of RNA to the liver, and injected them into mice along with nanoparticles containing mRNA that encodes Cas9.

They experimented with knocking out a few different genes expressed by hepatocytes, but focused most of their attention on the cholesterol-regulating Pcsk9 gene. The researchers were able to eliminate this gene in more than 80 percent of liver cells, and the Pcsk9 protein was undetectable in these mice. They also found a 35 percent drop in the total cholesterol levels of the treated mice.

The researchers are now working on identifying other liver diseases that might benefit from this approach, and advancing these approaches toward use in patients.

“I think having a fully synthetic nanoparticle that can specifically turn genes off could be a powerful tool not just for Pcsk9 but for other diseases as well,” Anderson says. “The liver is a really important organ and also is a source of disease for many people. If you can reprogram the DNA of your liver while you’re still using it, we think there are many diseases that could be addressed.”

“We are very excited to see this new application of nanotechnology open new avenues for gene editing,” Langer adds.

The research was funded by the National Institutes of Health (NIH), the Russian Scientific Fund, the Skoltech Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.

Here’s a link to and a citation for the paper,

Structure-guided chemical modification of guide RNA enables potent non-viral in vivo genome editing by Hao Yin, Chun-Qing Song, Sneha Suresh, Qiongqiong Wu, Stephen Walsh, Luke Hyunsik Rhym, Esther Mintzer, Mehmet Fatih Bolukbasi, Lihua Julie Zhu, Kevin Kauffman, Haiwei Mou, Alicia Oberholzer, Junmei Ding, Suet-Yan Kwan, Roman L Bogorad, Timofei Zatsepin, Victor Koteliansky, Scot A Wolfe, Wen Xue, Robert Langer, & Daniel G Anderson. Nature Biotechnology doi:10.1038/nbt.4005 Published online: 13 November 2017

This paper is behind a paywall.

Gold’s origin in the universe due to cosmic collision

An hypothesis for gold’s origins was first mentioned here in a May 26, 2016 posting,

The link between this research and my side project on gold nanoparticles is a bit tenuous but this work on the origins for gold and other precious metals being found in the stars is so fascinating and I’m determined to find a connection.

An artist’s impression of two neutron stars colliding. (Credit: Dana Berry / Skyworks Digital, Inc.) Courtesy: Kavli Foundation

From a May 19, 2016 news item on phys.org,

The origin of many of the most precious elements on the periodic table, such as gold, silver and platinum, has perplexed scientists for more than six decades. Now a recent study has an answer, evocatively conveyed in the faint starlight from a distant dwarf galaxy.

In a roundtable discussion, published today [May 19, 2016?], The Kavli Foundation spoke to two of the researchers behind the discovery about why the source of these heavy elements, collectively called “r-process” elements, has been so hard to crack.

From the Spring 2016 Kavli Foundation webpage hosting the “Galactic ‘Gold Mine’ Explains the Origin of Nature’s Heaviest Elements” Roundtable,

Astronomers studying a galaxy called Reticulum II have just discovered that its stars contain whopping amounts of these metals—collectively known as “r-process” elements (See “What is the R-Process?”). Of the 10 dwarf galaxies that have been similarly studied so far, only Reticulum II bears such strong chemical signatures. The finding suggests some unusual event took place billions of years ago that created ample amounts of heavy elements and then strew them throughout the galaxy’s reservoir of gas and dust. This r-process-enriched material then went on to form Reticulum II’s standout stars.

Based on the new study, from a team of researchers at the Kavli Institute at the Massachusetts Institute of Technology, the unusual event in Reticulum II was likely the collision of two, ultra-dense objects called neutron stars. Scientists have hypothesized for decades that these collisions could serve as a primary source for r-process elements, yet the idea had lacked solid observational evidence. Now armed with this information, scientists can further hope to retrace the histories of galaxies based on the contents of their stars, in effect conducting “stellar archeology.”

Researchers have confirmed the hypothesis according to an Oct. 16, 2017 news item on phys.org,

Gold’s origin in the Universe has finally been confirmed, after a gravitational wave source was seen and heard for the first time ever by an international collaboration of researchers, with astronomers at the University of Warwick playing a leading role.

Members of Warwick’s Astronomy and Astrophysics Group, Professor Andrew Levan, Dr Joe Lyman, Dr Sam Oates and Dr Danny Steeghs, led observations which captured the light of two colliding neutron stars, shortly after being detected through gravitational waves – perhaps the most eagerly anticipated phenomenon in modern astronomy.

Marina Koren’s Oct. 16, 2017 article for The Atlantic presents a richly evocative view (Note: Links have been removed),

Some 130 million years ago, in another galaxy, two neutron stars spiraled closer and closer together until they smashed into each other in spectacular fashion. The violent collision produced gravitational waves, cosmic ripples powerful enough to stretch and squeeze the fabric of the universe. There was a brief flash of light a million trillion times as bright as the sun, and then a hot cloud of radioactive debris. The afterglow hung for several days, shifting from bright blue to dull red as the ejected material cooled in the emptiness of space.

Astronomers detected the aftermath of the merger on Earth on August 17. For the first time, they could see the source of universe-warping forces Albert Einstein predicted a century ago. Unlike with black-hole collisions, they had visible proof, and it looked like a bright jewel in the night sky.

But the merger of two neutron stars is more than fireworks. It’s a factory.

Using infrared telescopes, astronomers studied the spectra—the chemical composition of cosmic objects—of the collision and found that the plume ejected by the merger contained a host of newly formed heavy chemical elements, including gold, silver, platinum, and others. Scientists estimate the amount of cosmic bling totals about 10,000 Earth-masses of heavy elements.

I’m not sure exactly what this image signifies but it did accompany Koren’s article so presumably it’s a representation of colliding neutron stars,

NSF / LIGO / Sonoma State University /A. Simonnet. Downloaded from: https://www.theatlantic.com/science/archive/2017/10/the-making-of-cosmic-bling/543030/

An Oct. 16, 2017 University of Warwick press release (also on EurekAlert), which originated the news item on phys.org, provides more detail,

Huge amounts of gold, platinum, uranium and other heavy elements were created in the collision of these compact stellar remnants, and were pumped out into the universe – unlocking the mystery of how gold on wedding rings and jewellery is originally formed.

The collision produced as much gold as the mass of the Earth. [emphasis mine]

This discovery has also confirmed conclusively that short gamma-ray bursts are directly caused by the merging of two neutron stars.

The neutron stars were very dense – as heavy as our Sun yet only 10 kilometres across – and they collided with each other 130 million years ago, when dinosaurs roamed the Earth, in a relatively old galaxy that was no longer forming many stars.

They drew towards each other over millions of light years, and revolved around each other increasingly quickly as they got closer – eventually spinning around each other five hundred times per second.

Their merging sent ripples through the fabric of space and time – and these ripples are the elusive gravitational waves spotted by the astronomers.

The gravitational waves were detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (Adv-LIGO) on 17 August this year [2017], with a short duration gamma-ray burst detected by the Fermi satellite just two seconds later.

This led to a flurry of observations as night fell in Chile, with a first report of a new source from the Swope 1m telescope.

Longstanding collaborators Professor Levan and Professor Nial Tanvir (from the University of Leicester) used the facilities of the European Southern Observatory to pinpoint the source in infrared light.

Professor Levan’s team was the first one to get observations of this new source with the Hubble Space Telescope. It comes from a galaxy called NGC 4993, 130 million light years away.

Andrew Levan, Professor in the Astronomy & Astrophysics group at the University of Warwick, commented: “Once we saw the data, we realised we had caught a new kind of astrophysical object. This ushers in the era of multi-messenger astronomy, it is like being able to see and hear for the first time.”

Dr Joe Lyman, who was observing at the European Southern Observatory at the time, was the first to alert the community that the source was unlike any seen before.

He commented: “The exquisite observations obtained in a few days showed we were observing a kilonova, an object whose light is powered by extreme nuclear reactions. This tells us that the heavy elements, like the gold or platinum in jewellery are the cinders, forged in the billion degree remnants of a merging neutron star.”

Dr Samantha Oates added: “This discovery has answered three questions that astronomers have been puzzling for decades: what happens when neutron stars merge? What causes the short duration gamma-ray bursts? Where are the heavy elements, like gold, made? In the space of about a week all three of these mysteries were solved.”

Dr Danny Steeghs said: “This is a new chapter in astrophysics. We hope that in the next few years we will detect many more events like this. Indeed, in Warwick we have just finished building a telescope designed to do just this job, and we expect it to pinpoint these sources in this new era of multi-messenger astronomy”.

Congratulations to all of the researchers involved in this work!

Many, many research teams were involved. Here’s a sampling of their news releases which focus on their areas of research,

University of the Witwatersrand (South Africa)

https://www.eurekalert.org/pub_releases/2017-10/uotw-wti101717.php

Weizmann Institute of Science (Israel)

https://www.eurekalert.org/pub_releases/2017-10/wios-cns101717.php

Carnegie Institution for Science (US)

https://www.eurekalert.org/pub_releases/2017-10/cifs-dns101217.php

Northwestern University (US)

https://www.eurekalert.org/pub_releases/2017-10/nu-adc101617.php

National Radio Astronomy Observatory (US)

https://www.eurekalert.org/pub_releases/2017-10/nrao-ru101317.php

Max-Planck-Gesellschaft (Germany)

https://www.eurekalert.org/pub_releases/2017-10/m-gwf101817.php

Penn State (Pennsylvania State University; US)

https://www.eurekalert.org/pub_releases/2017-10/ps-stl101617.php

University of California – Davis

https://www.eurekalert.org/pub_releases/2017-10/uoc–cns101717.php

The American Association for the Advancement of Science’s (AAAS) magazine, Science, has published seven papers on this research. Here’s an Oct. 16, 2017 AAAS news release with an overview of the papers,

https://www.eurekalert.org/pub_releases/2017-10/aaft-btf101617.php

I’m sure there are more news releases out there and that there will be many more papers published in many journals, so if this interests you, I encourage you to keep looking.

Two final pieces I’d like to draw your attention to: one answers basic questions and another focuses on how artists knew what to draw when neutron stars collide.

Keith A Spencer’s Oct. 18, 2017 piece on salon.com answers a lot of basic questions for those of us who don’t have a background in astronomy. Here are a couple of examples,

What is a neutron star?

Okay, you know how atoms have protons, neutrons, and electrons in them? And you know how protons are positively charged, and electrons are negatively charged, and neutrons are neutral?

Yeah, I remember that from watching Bill Nye as a kid.

Totally. Anyway, have you ever wondered why the negatively-charged electrons and the positively-charged protons don’t just merge into each other and form a neutral neutron? I mean, they’re sitting there in the atom’s nucleus pretty close to each other. Like, if you had two magnets that close, they’d stick together immediately.

I guess now that you mention it, yeah, it is weird.

Well, it’s because there’s another force deep in the atom that’s preventing them from merging.

It’s really really strong.

The only way to overcome this force is to have a huge amount of matter in a really hot, dense space — basically shove them into each other until they give up and stick together and become a neutron. This happens in very large stars that have been around for a while — the core collapses, and in the aftermath, the electrons in the star are so close to the protons, and under so much pressure, that they suddenly merge. There’s a big explosion and the outer material of the star is sloughed off.

Okay, so you’re saying under a lot of pressure and in certain conditions, some stars collapse and become big balls of neutrons?

Pretty much, yeah.

So why do the neutrons just stick around in a huge ball? Aren’t they neutral? What’s keeping them together? 

Gravity, mostly. But also the strong nuclear force, that aforementioned weird strong force. This isn’t something you’d encounter on a macroscopic scale — the strong force only really works at the type of distances typified by particles in atomic nuclei. And it’s different, fundamentally, than the electromagnetic force, which is what makes magnets attract and repel and what makes your hair stick up when you rub a balloon on it.

So these neutrons in a big ball are bound by gravity, but also sticking together by virtue of the strong nuclear force. 

So basically, the new ball of neutrons is really small, at least, compared to how heavy it is. That’s because the neutrons are all clumped together as if this neutron star is one giant atomic nucleus — which it kinda is. It’s like a giant atom made only of neutrons. If our sun were a neutron star, it would be less than 20 miles wide. It would also not be something you would ever want to get near.

Got it. That means two giant balls of neutrons that weighed like, more than our sun and were only ten-ish miles wide, suddenly smashed into each other, and in the aftermath created a black hole, and we are just now detecting it on Earth?

Exactly. Pretty weird, no?

Spencer does a good job of gradually taking you through increasingly complex explanations.
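As a rough check on that 'less than 20 miles wide' figure, here's a back-of-envelope calculation of my own: take the Sun's mass, assume it is packed at roughly the density of an atomic nucleus, and solve for the size of the resulting sphere,

import math

M_SUN = 1.989e30       # mass of the Sun, in kilograms
RHO_NUCLEAR = 2.3e17   # approximate density of nuclear matter, kg per cubic metre

# A uniform sphere satisfies M = (4/3) * pi * R^3 * rho,
# so R = (3M / (4 * pi * rho)) ** (1/3).
radius_m = (3 * M_SUN / (4 * math.pi * RHO_NUCLEAR)) ** (1 / 3)
diameter_miles = 2 * radius_m / 1609.34

print(f"radius   ~ {radius_m / 1000:.0f} km")      # roughly 13 km
print(f"diameter ~ {diameter_miles:.0f} miles")    # roughly 16 miles, under 20

The exact numbers depend on the density you assume, but the answer lands comfortably under Spencer's 20 miles.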

For those with artistic interests, Neel V. Patel tries to answer a question about how artists knew what to draw when neutron stars collided in his Oct. 18, 2017 piece for Slate.com,

All of these things make this discovery easy to marvel at and somewhat impossible to picture. Luckily, artists have taken up the task of imagining it for us, which you’ve likely seen if you’ve already stumbled on coverage of the discovery. Two bright, furious spheres of light and gas spiraling quickly into one another, resulting in a massive swell of lit-up matter along with light and gravitational waves rippling off speedily in all directions, towards parts unknown. These illustrations aren’t just alluring interpretations of a rare phenomenon; they are, to some extent, the translation of raw data and numbers into a tangible visual that gives scientists and nonscientists alike some way of grasping what just happened. But are these visualizations realistic? Is this what it actually looked like? No one has any idea. Which is what makes the scientific illustrators’ work all the more fascinating.

“My goal is to represent what the scientists found,” says Aurore Simmonet, a scientific illustrator based at Sonoma State University in Rohnert Park, California. Even though she said she doesn’t have a rigorous science background (she certainly didn’t know what a kilonova was before being tasked to illustrate one), she also doesn’t believe that type of experience is an absolute necessity. More critical, she says, is for the artist to have an interest in the subject matter and in learning new things, as well as a capacity to speak directly to scientists about their work.

Illustrators like Simmonet usually start off work on an illustration by asking the scientist what’s the biggest takeaway a viewer should grasp when looking at a visual. Unfortunately, this latest discovery yielded a multitude of papers emphasizing different conclusions and highlights. With so many scientific angles, there’s a stark challenge in trying to cram every important thing into a single drawing.

Clearly, however, the illustrations needed to center around the kilonova. Simmonet loves colors, so she began by discussing with the researchers what kind of color scheme would work best. The smash of two neutron stars lends itself well to deep, vibrant hues. Simmonet and Robin Dienel at the Carnegie Institution for Science elected to use a wide array of colors and drew bright cracking to show pressure forming at the merging. Others, like Luis Calcada at the European Southern Observatory, limited the color scheme in favor of emphasizing the bright moment of collision and the signal waves created by the kilonova.

Animators have even more freedom to show the event, since they have much more than a single frame to play with. The Conceptual Image Lab at NASA’s [US National Aeronautics and Space Administration] Goddard Space Flight Center created a short video about the new findings, and lead animator Brian Monroe says the video he and his colleagues designed shows off the evolution of the entire process: the rising action, climax, and resolution of the kilonova event.

The illustrators try to adhere to what the likely physics of the event entailed, soliciting feedback from the scientists to make sure they’re getting it right. The swirling of gas, the direction of ejected matter upon impact, the reflection of light, the proportions of the objects—all of these things are deliberately framed such that they make scientific sense. …

Do take a look at Patel’s piece, if for no other reason than to see all of the images he has embedded there. You may recognize Aurore Simmonet’s name from the credit line in the second image I have embedded here.

(Merry Christmas!) Japanese tree frogs inspire hardware for the highest of tech: a swarmalator

First, the frog,

[Japanese Tree Frog] By 池田正樹 (talk)masaki ikeda – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=4593224

I wish they had a recording of the mating calls for Japanese tree frogs since they were the inspiration for mathematicians at Cornell University (New York state, US) according to a November 17, 2017 news item on ScienceDaily,

How does the Japanese tree frog figure into the latest work of noted mathematician Steven Strogatz? As it turns out, quite prominently.

“We had read about these funny frogs that hop around and croak,” said Strogatz, the Jacob Gould Schurman Professor of Applied Mathematics. “They form patterns in space and time. Usually it’s about reproduction. And based on how the other guy or guys are croaking, they don’t want to be around another one that’s croaking at the same time as they are, because they’ll jam each other.”

A November 15, 2017 Cornell University news release (also on EurekAlert but dated November 17, 2017) by Tom Fleischman, which originated the news item, details how the calls led to ‘swarmalators’ (Note: Links have been removed),

Strogatz and Kevin O’Keeffe, Ph.D. ’17, used the curious mating ritual of male Japanese tree frogs as inspiration for their exploration of “swarmalators” – their term for systems in which both synchronization and swarming occur together.

Specifically, they considered oscillators whose phase dynamics and spatial dynamics are coupled. In the instance of the male tree frogs, they attempt to croak in exact anti-phase (one croaks while the other is silent) while moving away from a rival so as to be heard by females.

This opens up “a new class of math problems,” said Strogatz, a Stephen H. Weiss Presidential Fellow. “The question is, what do we expect to see when people start building systems like this or observing them in biology?”

Their paper, “Oscillators That Sync and Swarm,” was published Nov. 13 [2017] in Nature Communications. Strogatz and O’Keeffe – now a postdoctoral researcher with the Senseable City Lab at the Massachusetts Institute of Technology – collaborated with Hyunsuk Hong from Chonbuk National University in Jeonju, South Korea.

Swarming and synchronization both involve large, self-organizing groups of individuals interacting according to simple rules, but rarely have they been studied together, O’Keeffe said.

“No one had connected these two areas, in spite of the fact that there were all these parallels,” he said. “That was the theoretical idea that sort of seduced us, I suppose. And there were also a couple of concrete examples, which we liked – including the tree frogs.”

Studies of swarms focus on how animals move – think of birds flocking or fish schooling – while neglecting the dynamics of their internal states. Studies of synchronization do the opposite: They focus on oscillators’ internal dynamics. Strogatz long has been fascinated by fireflies’ synchrony and other similar phenomena, giving a TED Talk on the topic in 2004, but not on their motion.

“[Swarming and synchronization] are so similar, and yet they were never connected together, and it seems so obvious,” O’Keeffe said. “It’s a whole new landscape of possible behaviors that hadn’t been explored before.”

Using a pair of governing equations that assume swarmalators are free to move about, along with numerical simulations, the group found that a swarmalator system settles into one of five states:

  • Static synchrony – featuring circular symmetry, crystal-like distribution, fully synchronized in phase;
  • Static asynchrony – featuring uniform distribution, meaning that every phase occurs everywhere;
  • Static phase wave – swarmalators settle near others in a phase similar to their own, and phases are frozen at their initial values;
  • Splintered phase wave – nonstationary, disconnected clusters of distinct phases; and
  • Active phase wave – similar to bidirectional states found in biological swarms, where populations split into counter-rotating subgroups; also similar to vortex arrays formed by groups of sperm.

Through the study of simple models, the group found that the coupling of “sync” and “swarm” leads to rich patterns in both time and space, and could lead to further study of systems that exhibit this dual behavior.

“This opens up a lot of questions for many parts of science – there are a lot of things to try that people hadn’t thought of trying,” Strogatz said. “It’s science that opens doors for science. It’s inaugurating science, rather than culminating science.”

Here’s a link to and a citation for the paper,

Oscillators that sync and swarm by Kevin P. O’Keeffe, Hyunsuk Hong, & Steven H. Strogatz. Nature Communications 8, Article number: 1504 (2017) doi:10.1038/s41467-017-01190-3 Published online: 15 November 2017

This paper is open access.
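For anyone who wants to poke at the math themselves, here's a minimal numerical sketch of the kind of model the paper describes. It follows the commonly cited two-dimensional swarmalator form, in which positions and phases are coupled through two parameters (J and K); the constants and implementation details below are my own illustrative choices, not the authors' code,

import numpy as np

def swarmalator_step(x, theta, J=1.0, K=-0.1, dt=0.05):
    # One Euler step of a minimal 2-D swarmalator model.
    #   x     : (N, 2) array of positions
    #   theta : (N,) array of phases
    #   J     : how strongly similar phases attract in space
    #   K     : phase coupling strength (its sign and size select the state)
    N = len(theta)
    dx = x[None, :, :] - x[:, None, :]          # pairwise displacement vectors
    dist = np.linalg.norm(dx, axis=-1)
    np.fill_diagonal(dist, np.inf)              # ignore self-interaction
    dphase = theta[None, :] - theta[:, None]    # pairwise phase differences

    # Spatial attraction modulated by phase similarity, plus short-range repulsion.
    attract = (1.0 + J * np.cos(dphase))[:, :, None] * dx / dist[:, :, None]
    repel = dx / (dist ** 2)[:, :, None]
    v = (attract - repel).sum(axis=1) / N

    # Phase coupling that weakens with distance.
    dtheta = (K / N) * (np.sin(dphase) / dist).sum(axis=1)
    return x + dt * v, theta + dt * dtheta

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(200, 2))
theta = rng.uniform(0, 2 * np.pi, size=200)
for _ in range(2000):
    x, theta = swarmalator_step(x, theta)
print("phase coherence:", abs(np.exp(1j * theta).mean()))

Plotting the positions coloured by phase, and varying J and K, is how the five states listed above (static synchrony, static asynchrony, the phase waves and so on) show up in simulations of this kind.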

One last thing, these frogs have also inspired WiFi improvements (from the Japanese tree frog Wikipedia entry; Note: Links have been removed),

Journalist Toyohiro Akiyama carried some Japanese tree frogs with him during his trip to the Mir space station in December 1990.[citation needed] Calling behavior of the species was used to create an algorithm for optimizing Wi-Fi networks.[3]

While it’s not clear in the Wikipedia entry, the frogs were part of an experiment. Here’s a link to and a citation for the paper about the experiment, along with an abstract,

The Frog in Space (FRIS) experiment onboard Space Station Mir: final report and follow-on studies by Yamashita, M.; Izumi-Kurotani, A.; Mogami, Y.; Okuno, M.; Naitoh, T.; Wassersug, R. J. Biol Sci Space. 1997 Dec;11(4):313-20.

Abstract

The “Frog in Space” (FRIS) experiment marked a major step for Japanese space life science, on the occasion of the first space flight of a Japanese cosmonaut. At the core of FRIS were six Japanese tree frogs, Hyla japonica, flown on Space Station Mir for 8 days in 1990. The behavior of these frogs was observed and recorded under microgravity. The frogs took up a “parachuting” posture when drifting in a free volume on Mir. When perched on surfaces, they typically sat with their heads bent backward. Such a peculiar posture, after long exposure to microgravity, is discussed in light of motion sickness in amphibians. Histological examinations and other studies were made on the specimens upon recovery. Some organs, such as the liver and the vertebra, showed changes as a result of space flight; others were unaffected. Studies that followed FRIS have been conducted to prepare for a second FRIS on the International Space Station. Interspecific diversity in the behavioral reactions of anurans to changes in acceleration is the major focus of these investigations. The ultimate goal of this research is to better understand how organisms have adapted to gravity through their evolution on earth.

The paper is open access.