Tag Archives: MIT

3-D integration of nanotechnologies on a single computer chip

By integrating nanomaterials, researchers have developed a new technique for building a 3D computer chip capable of handling today’s huge amounts of data. Weirdly, the first two paragraphs of a July 5, 2017 news item on Nanowerk do not convey the main point (Note: A link has been removed),

As embedded intelligence is finding its way into ever more areas of our lives, fields ranging from autonomous driving to personalized medicine are generating huge amounts of data. But just as the flood of data is reaching massive proportions, the ability of computer chips to process it into useful information is stalling.

Now, researchers at Stanford University and MIT have built a new chip to overcome this hurdle. The results are published today in the journal Nature (“Three-dimensional integration of nanotechnologies for computing and data storage on a single chip”), by lead author Max Shulaker, an assistant professor of electrical engineering and computer science at MIT. Shulaker began the work as a PhD student alongside H.-S. Philip Wong and his advisor Subhasish Mitra, professors of electrical engineering and computer science at Stanford. The team also included professors Roger Howe and Krishna Saraswat, also from Stanford.

This image helps to convey the main points,

Instead of relying on silicon-based devices, a new chip uses carbon nanotubes and resistive random-access memory (RRAM) cells. The two are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. Courtesy MIT

As I have been quite impressed with their science writing, it was a bit surprising to find that the Massachusetts Institute of Technology (MIT) had issued a news release that didn’t follow the ‘rules’, i.e., cover as many of the journalistic questions (Who, What, Where, When, Why, and, sometimes, How) as possible in the first sentence/paragraph. The release is written more in the style of a magazine article, so the details take a while to emerge. From a July 5, 2017 MIT news release, which originated the news item,

Computers today comprise different chips cobbled together. There is a chip for computing and a separate chip for data storage, and the connections between the two are limited. As applications analyze increasingly massive volumes of data, the limited rate at which data can be moved between different chips is creating a critical communication “bottleneck.” And with limited real estate on the chip, there is not enough room to place them side-by-side, even as they have been miniaturized (a phenomenon known as Moore’s Law).

To make matters worse, the underlying devices, transistors made from silicon, are no longer improving at the historic rate that they have for decades.

The new prototype chip is a radical change from today’s chips. It uses multiple nanotechnologies, together with a new computer architecture, to reverse both of these trends.

Instead of relying on silicon-based devices, the chip uses carbon nanotubes, which are sheets of 2-D graphene formed into nanocylinders, and resistive random-access memory (RRAM) cells, a type of nonvolatile memory that operates by changing the resistance of a solid dielectric material. The researchers integrated over 1 million RRAM cells and 2 million carbon nanotube field-effect transistors, making the most complex nanoelectronic system ever made with emerging nanotechnologies.

The RRAM and carbon nanotubes are built vertically over one another, making a new, dense 3-D computer architecture with interleaving layers of logic and memory. By inserting ultradense wires between these layers, this 3-D architecture promises to address the communication bottleneck.
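To get a feel for why those ultradense interlayer wires matter, here’s a rough, back-of-envelope model of the trade-off between data movement and computation. All of the numbers below are placeholders I’ve invented for illustration; they are not figures from the paper.

```python
# Illustrative model of the data-movement bottleneck the 3-D architecture
# targets. All numbers are invented placeholders, not figures from the paper.

def job_time(bytes_moved, flops, bandwidth_bytes_s, compute_flops_s):
    """Crude model: total time = data-transfer time + compute time."""
    return bytes_moved / bandwidth_bytes_s + flops / compute_flops_s

WORK = dict(bytes_moved=1e9, flops=1e9)  # hypothetical job: 1 GB in, 1 GFLOP
COMPUTE = 1e12                           # 1 TFLOP/s of logic, same in both cases

# Conventional design: logic and memory on separate chips, narrow bus between.
two_chip = job_time(**WORK, bandwidth_bytes_s=10e9, compute_flops_s=COMPUTE)

# 3-D design: dense vertical wiring between interleaved logic/memory layers.
stacked = job_time(**WORK, bandwidth_bytes_s=1e12, compute_flops_s=COMPUTE)

print(f"two-chip: {two_chip * 1e3:.0f} ms   stacked: {stacked * 1e3:.0f} ms")
# With these placeholders, the two-chip job spends ~99% of its time just
# moving data (101 ms vs 2 ms); faster logic alone wouldn't close that gap.
```

The point of the toy model: once data movement dominates, the only meaningful lever left is interconnect bandwidth, which is exactly what stacking memory over logic increases.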

However, such an architecture is not possible with existing silicon-based technology, according to the paper’s lead author, Max Shulaker, who is a core member of MIT’s Microsystems Technology Laboratories. “Circuits today are 2-D, since building conventional silicon transistors involves extremely high temperatures of over 1,000 degrees Celsius,” says Shulaker. “If you then build a second layer of silicon circuits on top, that high temperature will damage the bottom layer of circuits.”

The key in this work is that carbon nanotube circuits and RRAM memory can be fabricated at much lower temperatures, below 200 °C. “This means they can be built up in layers without harming the circuits beneath,” Shulaker says.

This provides several simultaneous benefits for future computing systems. “The devices are better: Logic made from carbon nanotubes can be an order of magnitude more energy-efficient compared to today’s logic made from silicon, and similarly, RRAM can be denser, faster, and more energy-efficient compared to DRAM,” Wong says, referring to a conventional memory known as dynamic random-access memory.

“In addition to improved devices, 3-D integration can address another key consideration in systems: the interconnects within and between chips,” Saraswat adds.

“The new 3-D computer architecture provides dense and fine-grained integration of computing and data storage, drastically overcoming the bottleneck from moving data between chips,” Mitra says. “As a result, the chip is able to store massive amounts of data and perform on-chip processing to transform a data deluge into useful information.”

To demonstrate the potential of the technology, the researchers took advantage of the ability of carbon nanotubes to also act as sensors. On the top layer of the chip they placed over 1 million carbon nanotube-based sensors, which they used to detect and classify ambient gases.

Due to the layering of sensing, data storage, and computing, the chip was able to measure each of the sensors in parallel, and then write directly into its memory, generating huge bandwidth, Shulaker says.

Three-dimensional integration is the most promising approach to continue the technology scaling path set forth by Moore’s Law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

“It leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” Rabaey says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

“One big advantage of our demonstration is that it is compatible with today’s silicon infrastructure, both in terms of fabrication and design,” says Howe.

“The fact that this strategy is both CMOS [complementary metal-oxide-semiconductor] compatible and viable for a variety of applications suggests that it is a significant step in the continued advancement of Moore’s Law,” says Ken Hansen, president and CEO of the Semiconductor Research Corporation, which supported the research. “To sustain the promise of Moore’s Law economics, innovative heterogeneous approaches are required as dimensional scaling is no longer sufficient. This pioneering work embodies that philosophy.”

The team is working to improve the underlying nanotechnologies, while exploring the new 3-D computer architecture. For Shulaker, the next step is working with Massachusetts-based semiconductor company Analog Devices to develop new versions of the system that take advantage of its ability to carry out sensing and data processing on the same chip.

So, for example, the devices could be used to detect signs of disease by sensing particular compounds in a patient’s breath, says Shulaker.

“The technology could not only improve traditional computing, but it also opens up a whole new range of applications that we can target,” he says. “My students are now investigating how we can produce chips that do more than just computing.”

“This demonstration of the 3-D integration of sensors, memory, and logic is an exceptionally innovative development that leverages current CMOS technology with the new capabilities of carbon nanotube field–effect transistors,” says Sam Fuller, CTO emeritus of Analog Devices, who was not involved in the research. “This has the potential to be the platform for many revolutionary applications in the future.”

This work was funded by the Defense Advanced Research Projects Agency [DARPA], the National Science Foundation, Semiconductor Research Corporation, STARnet SONIC, and member companies of the Stanford SystemX Alliance.

Here’s a link to and a citation for the paper,

Three-dimensional integration of nanotechnologies for computing and data storage on a single chip by Max M. Shulaker, Gage Hills, Rebecca S. Park, Roger T. Howe, Krishna Saraswat, H.-S. Philip Wong, & Subhasish Mitra. Nature 547, 74–78 (06 July 2017) doi:10.1038/nature22994 Published online 05 July 2017

This paper is behind a paywall.

CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

After giving a basic explanation of the technology and some of the controversies in part 1 and offering more detail about the technology and the possibility of designer babies in part 2, this part covers public discussion: a call for one and the suggestion that one is already taking place in popular culture.

But a discussion does need to happen

In a move that is either an exquisite coincidence or has been carefully orchestrated (I vote for the latter), researchers from the University of Wisconsin-Madison have released a study about attitudes in the US to human genome editing. From an Aug. 11, 2017 University of Wisconsin-Madison news release (also on EurekAlert),

In early August 2017, an international team of scientists announced they had successfully edited the DNA of human embryos. As people process the political, moral and regulatory issues of the technology — which nudges us closer to nonfiction than science fiction — researchers at the University of Wisconsin-Madison and Temple University show the time is now to involve the American public in discussions about human genome editing.

In a study published Aug. 11 in the journal Science, the researchers assessed what people in the United States think about the uses of human genome editing and how their attitudes may drive public discussion. They found a public divided on its uses but united in the importance of moving conversations forward.

“There are several pathways we can go down with gene editing,” says UW-Madison’s Dietram Scheufele, lead author of the study and member of a National Academy of Sciences committee that compiled a report focused on human gene editing earlier this year. “Our study takes an exhaustive look at all of those possible pathways forward and asks where the public stands on each one of them.”

Compared to previous studies on public attitudes about the technology, the new study takes a more nuanced approach, examining public opinion about the use of gene editing for disease therapy versus for human enhancement, and about editing that becomes hereditary versus editing that does not.

The research team, which included Scheufele and Dominique Brossard — both professors of life sciences communication — along with Michael Xenos, professor of communication arts, first surveyed study participants about the use of editing to treat disease (therapy) versus for enhancement (creating so-called “designer babies”). While about two-thirds of respondents expressed at least some support for therapeutic editing, only one-third expressed support for using the technology for enhancement.

Diving even deeper, researchers looked into public attitudes about gene editing on specific cell types — somatic or germline — either for therapy or enhancement. Somatic cells are non-reproductive, so edits made in those cells do not affect future generations. Germline cells, however, are heritable, and changes made in these cells would be passed on to children.

Public support of therapeutic editing was high both in cells that would be inherited and those that would not, with 65 percent of respondents supporting therapy in germline cells and 64 percent supporting therapy in somatic cells. When considering enhancement editing, however, support depended more upon whether the changes would affect future generations. Only 26 percent of people surveyed supported enhancement editing in heritable germline cells and 39 percent supported enhancement of somatic cells that would not be passed on to children.

“A majority of people are saying that germline enhancement is where the technology crosses that invisible line and becomes unacceptable,” says Scheufele. “When it comes to therapy, the public is more open, and that may partly be reflective of how severe some of those genetically inherited diseases are. The potential treatments for those diseases are something the public at least is willing to consider.”

Beyond questions of support, researchers also wanted to understand what was driving public opinions. They found that two factors were related to respondents’ attitudes toward gene editing as well as their attitudes toward the public’s role in its emergence: the level of religious guidance in their lives, and factual knowledge about the technology.

Those with a high level of religious guidance in their daily lives had lower support for human genome editing than those with low religious guidance. Additionally, those with high knowledge of the technology were more supportive of it than those with less knowledge.

While respondents with high religious guidance and those with high knowledge differed on their support for the technology, both groups highly supported public engagement in its development and use. These results suggest broad agreement that the public should be involved in questions of political, regulatory and moral aspects of human genome editing.

“The public may be split along lines of religiosity or knowledge with regard to what they think about the technology and scientific community, but they are united in the idea that this is an issue that requires public involvement,” says Scheufele. “Our findings show very nicely that the public is ready for these discussions and that the time to have the discussions is now, before the science is fully ready and while we have time to carefully think through different options regarding how we want to move forward.”

Here’s a link to and a citation for the paper,

U.S. attitudes on human genome editing by Dietram A. Scheufele, Michael A. Xenos, Emily L. Howell, Kathleen M. Rose, Dominique Brossard, and Bruce W. Hardy. Science 11 Aug 2017: Vol. 357, Issue 6351, pp. 553-554 DOI: 10.1126/science.aan3708

This paper is behind a paywall.

A couple of final comments

Briefly, I notice that there’s no mention of the ethics of patenting this technology in the news release about the study.

Moving on, it seems surprising that the first team to engage in germline editing in the US is in Oregon; I would have expected the work to come from Massachusetts, California, or Illinois where a lot of bleeding edge medical research is performed. However, given the dearth of financial support from federal funding institutions, it seems likely that only an outsider would dare to engage in the research. Given the timing, Mitalipov’s work was already well underway before the recent about-face from the US National Academy of Sciences (Note: Kaiser’s Feb. 14, 2017 article does note that for some the recent recommendations do not represent any change).

As for discussion on issues such as editing of the germline, I’ve often noted here that popular culture (including advertising, science fiction, and other dramas in various media) often provides an informal forum for discussion. Joelle Renstrom in an Aug. 13, 2017 article for slate.com writes that Orphan Black (a BBC America series featuring clones) opened up a series of questions about science and ethics in the guise of a thriller. She offers a précis of the first four seasons (Note: A link has been removed),

If you stopped watching a few seasons back, here’s a brief synopsis of how the mysteries wrap up. Neolution, an organization that seeks to control human evolution through genetic modification, began Project Leda, the cloning program, for two primary reasons: to see whether they could and to experiment with mutations that might allow people (i.e., themselves) to live longer. Neolution partnered with biotech companies such as Dyad, using its big pharma reach and deep pockets to harvest people’s genetic information and to conduct individual and germline (that is, genetic alterations passed down through generations) experiments, including infertility treatments that result in horrifying birth defects and body modification, such as tail-growing.

She then provides the article’s thesis (Note: Links have been removed),

Orphan Black demonstrates Carl Sagan’s warning of a time when “awesome technological powers are in the hands of a very few.” Neolutionists do whatever they want, pausing only to consider whether they’re missing an opportunity to exploit. Their hubris is straight out of Victor Frankenstein’s playbook. Frankenstein wonders whether he ought to first reanimate something “of simpler organisation” than a human, but starting small means waiting for glory. Orphan Black’s evil scientists embody this belief: if they’re going to play God, then they’ll control not just their own destinies, but the clones’ and, ultimately, all of humanity’s. Any sacrifices along the way are for the greater good—reasoning that culminates in Westmoreland’s eugenics fantasy to genetically sterilize 99 percent of the population he doesn’t enhance.

Orphan Black uses sci-fi tropes to explore real-world plausibility. Neolution shares similarities with transhumanism, the belief that humans should use science and technology to take control of their own evolution. While some transhumanists dabble in body modifications, such as microchip implants or night-vision eye drops, others seek to end suffering by curing human illness and aging. But even these goals can be seen as selfish, as access to disease-eradicating or life-extending technologies would be limited to the wealthy. Westmoreland’s goal to “sell Neolution to the 1 percent” seems frighteningly plausible—transhumanists, who statistically tend to be white, well-educated, and male, and their associated organizations raise and spend massive sums of money to help fulfill their goals. …

On Orphan Black, denial of choice is tantamount to imprisonment. That the clones have to earn autonomy underscores the need for ethics in science, especially when it comes to genetics. The show’s message here is timely given the rise of gene-editing techniques such as CRISPR. Recently, the National Academy of Sciences gave germline gene editing the green light, just one year after academy scientists from around the world argued it would be “irresponsible to proceed” without further exploring the implications. Scientists in the United Kingdom and China have already begun human genetic engineering and American scientists recently genetically engineered a human embryo for the first time. The possibility of Project Leda isn’t farfetched. Orphan Black warns us that money, power, and fear of death can corrupt both people and science. Once that happens, loss of humanity—of both the scientists and the subjects—is inevitable.

In Carl Sagan’s dark vision of the future, “people have lost the ability to set their own agendas or knowledgeably question those in authority.” This describes the plight of the clones at the outset of Orphan Black, but as the series continues, they challenge this paradigm by approaching science and scientists with skepticism, ingenuity, and grit. …

I hope there are discussions such as those Scheufele and Brossard are advocating but it might be worth considering that there is already some discussion underway, as informal as it is.

-30-

Part 1: CRISPR and editing the germline in the US (part 1 of 3): In the beginning

Part 2: CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Having included an explanation of CRISPR-Cas9 technology along with the news about the first US team to edit the germline and bits and pieces about ethics and a patent fight (part 1), this part homes in on the details of the work and worries about ‘designer babies’.

The interest flurry

I found three articles addressing the research and all three concur that despite some of the early reporting, this is not the beginning of a ‘designer baby’ generation.

First up was Nick Thieme in a July 28, 2017 article for Slate,

MIT Technology Review reported Thursday that a team of researchers from Portland, Oregon were the first team of U.S.-based scientists to successfully create a genetically modified human embryo. The researchers, led by Shoukhrat Mitalipov of Oregon Health and Science University, changed the DNA of—in MIT Technology Review’s words—“many tens” of genetically-diseased embryos by injecting the host egg with CRISPR, a DNA-based gene editing tool first discovered in bacteria, at the time of fertilization. CRISPR-Cas9, as the full editing system is called, allows scientists to change genes accurately and efficiently. As has happened with research elsewhere, the CRISPR-edited embryos weren’t implanted—they were kept sustained for only a couple of days.

In addition to being the first American team to complete this feat, the researchers also improved upon the work of the three Chinese research teams that beat them to editing embryos with CRISPR: Mitalipov’s team increased the proportion of embryonic cells that received the intended genetic changes, addressing an issue called “mosaicism,” which is when an embryo is comprised of cells with different genetic makeups. Increasing that proportion is essential to CRISPR work in eliminating inherited diseases, to ensure that the CRISPR therapy has the intended result. The Oregon team also reduced the number of genetic errors introduced by CRISPR, reducing the likelihood that a patient would develop cancer elsewhere in the body.

Separate from the scientific advancements, it’s a big deal that this work happened in a country with such intense politicization of embryo research. …

But there are a great number of obstacles between the current research and the future of genetically editing all children to be 12-foot-tall Einsteins.

Ed Yong in an Aug. 2, 2017 article for The Atlantic offered a comprehensive overview of the research and its implications (unusually for Yong, there seems to be a mildly condescending note, but it’s worth ignoring for the wealth of information in the article; Note: Links have been removed),

… the full details of the experiment, which are released today, show that the study is scientifically important but much less of a social inflection point than has been suggested. “This has been widely reported as the dawn of the era of the designer baby, making it probably the fifth or sixth time people have reported that dawn,” says Alta Charo, an expert on law and bioethics at the University of Wisconsin-Madison. “And it’s not.”

Given the persistent confusion around CRISPR and its implications, I’ve laid out exactly what the team did, and what it means.

Who did the experiments?

Shoukhrat Mitalipov is a Kazakhstani-born cell biologist with a history of breakthroughs—and controversy—in the stem cell field. He was the first scientist to clone monkeys. He was the first to create human embryos by cloning adult cells—a move that could provide patients with an easy supply of personalized stem cells. He also pioneered a technique for creating embryos with genetic material from three biological parents, as a way of preventing a group of debilitating inherited diseases.

Although MIT Tech Review name-checked Mitalipov alone, the paper splits credit for the research between five collaborating teams—four based in the United States, and one in South Korea.

What did they actually do?

The project effectively began with an elevator conversation between Mitalipov and his colleague Sanjiv Kaul. Mitalipov explained that he wanted to use CRISPR to correct a disease-causing gene in human embryos, and was trying to figure out which disease to focus on. Kaul, a cardiologist, told him about hypertrophic cardiomyopathy (HCM)—an inherited heart disease that’s commonly caused by mutations in a gene called MYBPC3. HCM is surprisingly common, affecting 1 in 500 adults. Many of them lead normal lives, but in some, the walls of their hearts can thicken and suddenly fail. For that reason, HCM is the commonest cause of sudden death in athletes. “There really is no treatment,” says Kaul. “A number of drugs are being evaluated but they are all experimental,” and they merely treat the symptoms. The team wanted to prevent HCM entirely by removing the underlying mutation.

They collected sperm from a man with HCM and used CRISPR to change his mutant gene into its normal healthy version, while simultaneously using the sperm to fertilize eggs that had been donated by female volunteers. In this way, they created embryos that were completely free of the mutation. The procedure was effective, and avoided some of the critical problems that have plagued past attempts to use CRISPR in human embryos.

Wait, other human embryos have been edited before?

There have been three attempts in China. The first two—in 2015 and 2016—used non-viable embryos that could never have resulted in a live birth. The third—announced this March—was the first to use viable embryos that could theoretically have been implanted in a womb. All of these studies showed that CRISPR gene-editing, for all its hype, is still in its infancy.

The editing was imprecise. CRISPR is heralded for its precision, allowing scientists to edit particular genes of choice. But in practice, some of the Chinese researchers found worrying levels of off-target mutations, where CRISPR mistakenly cut other parts of the genome.

The editing was inefficient. The first Chinese team only managed to successfully edit a disease gene in 4 out of 86 embryos, and the second team fared even worse.

The editing was incomplete. Even in the successful cases, each embryo had a mix of modified and unmodified cells. This pattern, known as mosaicism, poses serious safety problems if gene-editing were ever to be used in practice. Doctors could end up implanting women with embryos that they thought were free of a disease-causing mutation, but were only partially free. The resulting person would still have many tissues and organs that carry those mutations, and might go on to develop symptoms.

What did the American team do differently?

The Chinese teams all used CRISPR to edit embryos at early stages of their development. By contrast, the Oregon researchers delivered the CRISPR components at the earliest possible point—minutes before fertilization. That neatly avoids the problem of mosaicism by ensuring that an embryo is edited from the very moment it is created. The team did this with 54 embryos and successfully edited the mutant MYBPC3 gene in 72 percent of them. In the other 28 percent, the editing didn’t work—a high failure rate, but far lower than in previous attempts. Better still, the team found no evidence of off-target mutations.

This is a big deal. Many scientists assumed that they’d have to do something more convoluted to avoid mosaicism. They’d have to collect a patient’s cells, which they’d revert into stem cells, which they’d use to make sperm or eggs, which they’d edit using CRISPR. “That’s a lot of extra steps, with more risks,” says Alta Charo. “If it’s possible to edit the embryo itself, that’s a real advance.” Perhaps for that reason, this is the first study to edit human embryos that was published in a top-tier scientific journal—Nature, which rejected some of the earlier Chinese papers.

Is this kind of research even legal?

Yes. In Western Europe, 15 countries out of 22 ban any attempts to change the human germ line—a term referring to sperm, eggs, and other cells that can transmit genetic information to future generations. No such stance exists in the United States but Congress has banned the Food and Drug Administration from considering research applications that make such modifications. Separately, federal agencies like the National Institutes of Health are banned from funding research that ultimately destroys human embryos. But the Oregon team used non-federal money from their institutions, and donations from several small non-profits. No taxpayer money went into their work. [emphasis mine]

Why would you want to edit embryos at all?

Partly to learn more about ourselves. By using CRISPR to manipulate the genes of embryos, scientists can learn more about the earliest stages of human development, and about problems like infertility and miscarriages. That’s why biologist Kathy Niakan from the Crick Institute in London recently secured a license from a British regulator to use CRISPR on human embryos.

Isn’t this a slippery slope toward making designer babies?

In terms of avoiding genetic diseases, it’s not conceptually different from PGD (preimplantation genetic diagnosis), which is already widely used. The bigger worry is that gene-editing could be used to make people stronger, smarter, or taller, paving the way for a new eugenics, and widening the already substantial gaps between the wealthy and poor. But many geneticists believe that such a future is fundamentally unlikely because complex traits like height and intelligence are the work of hundreds or thousands of genes, each of which has a tiny effect. The prospect of editing them all is implausible. And since genes are so thoroughly interconnected, it may be impossible to edit one particular trait without also affecting many others.
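The ‘hundreds or thousands of genes, each with a tiny effect’ argument is easy to see with a toy simulation. This is a deliberately cartoonish additive model with invented numbers, not real genetics:

```python
import random

# Toy polygenic model: a trait shaped by 1,000 loci, each with a tiny
# additive effect. All numbers are invented, for illustration only.
random.seed(0)
N_LOCI = 1000
effects = [random.gauss(0, 0.1) for _ in range(N_LOCI)]  # per-copy effect size

def trait(genotype):
    return sum(e * g for e, g in zip(effects, genotype))

# Natural spread: 1,000 simulated people with 0/1/2 copies at each locus.
people = [[random.randint(0, 2) for _ in range(N_LOCI)] for _ in range(1000)]
values = [trait(p) for p in people]
mean = sum(values) / len(values)
sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

# "Edit" one person's single largest-effect locus to its best genotype.
person = people[0]
best = max(range(N_LOCI), key=lambda i: abs(effects[i]))
edited = person[:]
edited[best] = 2 if effects[best] > 0 else 0
shift = trait(edited) - trait(person)

print(f"population spread (sd): {sd:.2f}   shift from one edit: {shift:.2f}")
# Even editing the strongest single locus moves the trait by only a small
# fraction of the natural spread - the geneticists' point in miniature.
```

Even in this crude sketch, editing the single best target barely budges the trait, which is why the ‘designer baby’ scenario for complex traits looks so implausible.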

“There’s the worry that this could be used for enhancement, so society has to draw a line,” says Mitalipov. “But this is pretty complex technology and it wouldn’t be hard to regulate it.”

Does this discovery have any social importance at all?

“It’s not so much about designer babies as it is about geographical location,” says Charo. “It’s happening in the United States, and everything here around embryo research has high sensitivity.” She and others worry that the early report about the study, before the actual details were available for scrutiny, could lead to unnecessary panic. “Panic reactions often lead to panic-driven policy … which is usually bad policy,” wrote Greely [bioethicist Hank Greely].

As I understand it, despite the change in stance, there is no federal funding available for the research performed by Mitalipov and his team.

Finally, University College London (UCL) scientists Joyce Harper and Helen O’Neill wrote about CRISPR, the Oregon team’s work, and the possibilities in an Aug. 3, 2017 essay for The Conversation (Note: Links have been removed),

The genome editing tool used, CRISPR-Cas9, has transformed the field of biology in the short time since its discovery in that it not only promises, but delivers. CRISPR has surpassed all previous efforts to engineer cells and alter genomes at a fraction of the time and cost.

The technology, which works like molecular scissors to cut and paste DNA, is a natural defence system that bacteria use to fend off harmful infections. This system has the ability to recognise invading virus DNA, cut it and integrate this cut sequence into its own genome – allowing the bacterium to render itself immune to future infections of viruses with similar DNA. It is this ability to recognise and cut DNA that has allowed scientists to use it to target and edit specific DNA regions.
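For readers who want the ‘molecular scissors’ image made mechanical, here is a toy sketch of how Cas9 targeting is usually described: a guide sequence must match the DNA immediately upstream of an ‘NGG’ motif (the PAM), and the cut falls about three bases before that motif. The DNA and guide below are invented, and real guides are about 20 letters long; this is a cartoon, not a bioinformatics tool.

```python
import re

def find_cas9_cuts(dna, guide):
    """Toy model: report cut positions where `guide` sits just upstream of an
    NGG PAM. Cas9 is usually described as cutting ~3 bp before the PAM."""
    return [m.end() - 3 for m in re.finditer(f"{guide}(?=[ACGT]GG)", dna)]

# Invented 'genome' and (shortened) guide, for illustration only.
dna = "TTACGGATTGCAGGCTGGCTAACGATTGCAGGCAGGTTACA"
guide = "GATTGCAGGC"

for cut in find_cas9_cuts(dna, guide):
    print(f"cut between positions {cut - 1} and {cut}: {dna[:cut]} / {dna[cut:]}")
# Only guide matches sitting next to an NGG motif get cut; a match with no
# adjacent PAM is left alone, which is part of what gives Cas9 its specificity.
```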

When this technology is applied to “germ cells” – the sperm and eggs – or embryos, it changes the germline. That means that any alterations made would be permanent and passed down to future generations. This makes it more ethically complex, but there are strict regulations around human germline genome editing, which is predominantly illegal. The UK received a licence in 2016 to carry out CRISPR on human embryos for research into early development. But edited embryos are not allowed to be inserted into the uterus and develop into a fetus in any country.

Germline genome editing came into the global spotlight when Chinese scientists announced in 2015 that they had used CRISPR to edit non-viable human embryos – cells that could never result in a live birth. They did this to modify the gene responsible for the blood disorder β-thalassaemia. While it was met with some success, it received a lot of criticism because of the premature use of this technology in human embryos. The results showed a high number of potentially dangerous, off-target mutations created in the procedure.

Impressive results

The new study, published in Nature, is different because it deals with viable human embryos and shows that the genome editing can be carried out safely – without creating harmful mutations. The team used CRISPR to correct a mutation in the gene MYBPC3, which accounts for approximately 40% of the myocardial disease hypertrophic cardiomyopathy. This is a dominant disease, so an affected individual only needs one abnormal copy of the gene to be affected.

The researchers used sperm from a patient carrying one copy of the MYBPC3 mutation to create 54 embryos. They edited them using CRISPR-Cas9 to correct the mutation. Without genome editing, approximately 50% of the embryos would carry the patient’s normal gene and 50% would carry his abnormal gene.

After genome editing, the aim would be for 100% of embryos to be normal. In the first round of the experiments, they found that 66.7% of embryos – 36 out of 54 – were normal after being injected with CRISPR. Of the remaining 18 embryos, five had remained unchanged, suggesting editing had not worked. In 13 embryos, only a portion of cells had been edited.

The level of efficiency is affected by the type of CRISPR machinery used and, critically, the timing in which it is put into the embryo. The researchers therefore also tried injecting the sperm and the CRISPR-Cas9 complex into the egg at the same time, which resulted in more promising results. This was done for 75 mature donated human eggs using a common IVF technique called intracytoplasmic sperm injection. This time, impressively, 72.4% of embryos were normal as a result. The approach also lowered the number of embryos containing a mixture of edited and unedited cells (these embryos are called mosaics).
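As a quick sanity check on those percentages (the first-round figures echo the essay; the assumption that every injected egg became a scorable embryo in the second round is mine):

```python
# Arithmetic check on the reported editing efficiencies. First-round numbers
# come straight from the essay; the second-round embryo count is my own
# rough inference (assumes every injected egg became an embryo).
total, normal = 54, 36
unchanged, mosaic = 5, 13          # the remaining 18 embryos

assert normal + unchanged + mosaic == total
print(f"first round:  {normal / total:.1%} corrected")        # -> 66.7%

eggs, rate = 75, 0.724             # second round: 72.4% reported normal
print(f"second round: roughly {rate * eggs:.0f} of {eggs} normal")
```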

Finally, the team injected a further 22 embryos which were grown into blastocysts – a later stage of embryo development. These were sequenced and the researchers found that the editing had indeed worked. Importantly, they could show that the level of off-target mutations was low.

A brave new world?

So does this mean we finally have a cure for debilitating, heritable diseases? It’s important to remember that the study did not achieve a 100% success rate. Even the researchers themselves stress that further research is needed in order to fully understand the potential and limitations of the technique.

In our view, it is unlikely that genome editing would be used to treat the majority of inherited conditions anytime soon. We still can’t be sure how a child with a genetically altered genome will develop over a lifetime, so it seems unlikely that couples carrying a genetic disease would embark on gene editing rather than undergoing already available tests – such as preimplantation genetic diagnosis or prenatal diagnosis – where the embryos or fetus are tested for genetic faults.

-30-

As might be expected there is now a call for public discussion about the ethics about this kind of work. See Part 3.

For anyone who started in the middle of this series, here’s Part 1 featuring an introduction to the technology and some of the issues.

CRISPR and editing the germline in the US (part 1 of 3): In the beginning

There’s been a minor flurry of interest in CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats; also known as CRISPR-Cas9), a gene-editing technique, since a team in Oregon announced a paper describing their work editing the germline. Since I’ve been following the CRISPR-Cas9 story for a while, this seems like a good juncture for a more in-depth look at the topic. In this first part I’m including an introduction to CRISPR, some information about the latest US work, and some previous writing about ethics issues raised when Chinese scientists first announced their work editing germlines in 2015 and during the patent dispute between the University of California at Berkeley and Harvard University’s Broad Institute.

Introduction to CRISPR

I’ve been searching for a good description of CRISPR and this helped to clear up some questions for me (Thank you to MIT Review),

For anyone who’s been reading about science for a while, this upbeat approach to explaining how a particular technology will solve all sorts of problems will seem quite familiar. It’s not the most hyperbolic piece I’ve seen but it barely mentions any problems associated with research (for some of the problems see: ‘The interest flurry’ later in part 2).

Oregon team

Steve Connor’s July 26, 2017 article for the MIT (Massachusetts Institute of Technology) Technology Review breaks the news (Note: Links have been removed),

The first known attempt at creating genetically modified human embryos in the United States has been carried out by a team of researchers in Portland, Oregon, MIT Technology Review has learned.

The effort, led by Shoukhrat Mitalipov of Oregon Health and Science University, involved changing the DNA of a large number of one-cell embryos with the gene-editing technique CRISPR, according to people familiar with the scientific results.

Until now, American scientists have watched with a combination of awe, envy, and some alarm as scientists elsewhere were first to explore the controversial practice. To date, three previous reports of editing human embryos were all published by scientists in China.

Now Mitalipov is believed to have broken new ground both in the number of embryos experimented upon and by demonstrating that it is possible to safely and efficiently correct defective genes that cause inherited diseases.

Although none of the embryos were allowed to develop for more than a few days—and there was never any intention of implanting them into a womb—the experiments are a milestone on what may prove to be an inevitable journey toward the birth of the first genetically modified humans.

In altering the DNA code of human embryos, the objective of scientists is to show that they can eradicate or correct genes that cause inherited disease, like the blood condition beta-thalassemia. The process is termed “germline engineering” because any genetically modified child would then pass the changes on to subsequent generations via their own germ cells—the egg and sperm.

Some critics say germline experiments could open the floodgates to a brave new world of “designer babies” engineered with genetic enhancements—a prospect bitterly opposed by a range of religious organizations, civil society groups, and biotech companies.

The U.S. intelligence community last year called CRISPR a potential “weapon of mass destruction.”

Here’s a link to and a citation for the groundbreaking paper,

Correction of a pathogenic gene mutation in human embryos by Hong Ma, Nuria Marti-Gutierrez, Sang-Wook Park, Jun Wu, Yeonmi Lee, Keiichiro Suzuki, Amy Koski, Dongmei Ji, Tomonari Hayama, Riffat Ahmed, Hayley Darby, Crystal Van Dyken, Ying Li, Eunju Kang, A.-Reum Park, Daesik Kim, Sang-Tae Kim, Jianhui Gong, Ying Gu, Xun Xu, David Battaglia, Sacha A. Krieg, David M. Lee, Diana H. Wu, Don P. Wolf, Stephen B. Heitner, Juan Carlos Izpisua Belmonte, Paula Amato, Jin-Soo Kim, Sanjiv Kaul, & Shoukhrat Mitalipov. Nature (2017) doi:10.1038/nature23305 Published online 02 August 2017

This paper appears to be open access.

CRISPR Issues: ethics and patents

In my May 14, 2015 posting I mentioned a ‘moratorium’ on germline research, the Chinese research paper, and the stance taken by the US National Institutes of Health (NIH),

The CRISPR technology has reignited a discussion about ethical and moral issues of human genetic engineering some of which is reviewed in an April 7, 2015 posting about a moratorium by Sheila Jasanoff, J. Benjamin Hurlbut and Krishanu Saha for the Guardian science blogs (Note: A link has been removed),

On April 3, 2015, a group of prominent biologists and ethicists writing in Science called for a moratorium on germline gene engineering; modifications to the human genome that will be passed on to future generations. The moratorium would apply to a technology called CRISPR/Cas9, which enables the removal of undesirable genes, insertion of desirable ones, and the broad recoding of nearly any DNA sequence.

Such modifications could affect every cell in an adult human being, including germ cells, and therefore be passed down through the generations. Many organisms across the range of biological complexity have already been edited in this way to generate designer bacteria, plants and primates. There is little reason to believe the same could not be done with human eggs, sperm and embryos. Now that the technology to engineer human germlines is here, the advocates for a moratorium declared, it is time to chart a prudent path forward. They recommend four actions: a hold on clinical applications; creation of expert forums; transparent research; and a globally representative group to recommend policy approaches.

The authors go on to review precedents and reasons for the moratorium while suggesting we need better ways for citizens to engage with and debate these issues,

An effective moratorium must be grounded in the principle that the power to modify the human genome demands serious engagement not only from scientists and ethicists but from all citizens. We need a more complex architecture for public deliberation, built on the recognition that we, as citizens, have a duty to participate in shaping our biotechnological futures, just as governments have a duty to empower us to participate in that process. Decisions such as whether or not to edit human genes should not be left to elite and invisible experts, whether in universities, ad hoc commissions, or parliamentary advisory committees. Nor should public deliberation be temporally limited by the span of a moratorium or narrowed to topics that experts deem reasonable to debate.

I recommend reading the post in its entirety as there are nuances that are best appreciated in the entirety of the piece.

Shortly after this essay was published, Chinese scientists announced they had genetically modified (nonviable) human embryos. From an April 22, 2015 article by David Cyranoski and Sara Reardon in Nature, where the research and some of the ethical issues are discussed,

In a world first, Chinese scientists have reported editing the genomes of human embryos. The results are published in the online journal Protein & Cell and confirm widespread rumours that such experiments had been conducted — rumours that sparked a high-profile debate last month about the ethical implications of such work.

In the paper, researchers led by Junjiu Huang, a gene-function researcher at Sun Yat-sen University in Guangzhou, tried to head off such concerns by using ‘non-viable’ embryos, which cannot result in a live birth, that were obtained from local fertility clinics. The team attempted to modify the gene responsible for β-thalassaemia, a potentially fatal blood disorder, using a gene-editing technique known as CRISPR/Cas9. The researchers say that their results reveal serious obstacles to using the method in medical applications.

“I believe this is the first report of CRISPR/Cas9 applied to human pre-implantation embryos and as such the study is a landmark, as well as a cautionary tale,” says George Daley, a stem-cell biologist at Harvard Medical School in Boston, Massachusetts. “Their study should be a stern warning to any practitioner who thinks the technology is ready for testing to eradicate disease genes.”

….

Huang says that the paper was rejected by Nature and Science, in part because of ethical objections; both journals declined to comment on the claim. (Nature’s news team is editorially independent of its research editorial team.)

He adds that critics of the paper have noted that the low efficiencies and high number of off-target mutations could be specific to the abnormal embryos used in the study. Huang acknowledges the critique, but because there are no examples of gene editing in normal embryos he says that there is no way to know if the technique operates differently in them.

Still, he maintains that the embryos allow for a more meaningful model — and one closer to a normal human embryo — than an animal model or one using adult human cells. “We wanted to show our data to the world so people know what really happened with this model, rather than just talking about what would happen without data,” he says.

This, too, is a good and thoughtful read.

There was an official response in the US to the publication of this research, from an April 29, 2015 post by David Bruggeman on his Pasco Phronesis blog (Note: Links have been removed),

In light of Chinese researchers reporting their efforts to edit the genes of ‘non-viable’ human embryos, the National Institutes of Health (NIH) Director Francis Collins issued a statement (H/T Carl Zimmer).

“NIH will not fund any use of gene-editing technologies in human embryos. The concept of altering the human germline in embryos for clinical purposes has been debated over many years from many different perspectives, and has been viewed almost universally as a line that should not be crossed. Advances in technology have given us an elegant new way of carrying out genome editing, but the strong arguments against engaging in this activity remain. These include the serious and unquantifiable safety issues, ethical issues presented by altering the germline in a way that affects the next generation without their consent, and a current lack of compelling medical applications justifying the use of CRISPR/Cas9 in embryos.” …

The US has modified its stance according to a February 14, 2017 article by Jocelyn Kaiser for Science Magazine (Note: Links have been removed),

Editing the DNA of a human embryo to prevent a disease in a baby could be ethically allowable one day—but only in rare circumstances and with safeguards in place, says a widely anticipated report released today.

The report from an international committee convened by the U.S. National Academy of Sciences (NAS) and the National Academy of Medicine in Washington, D.C., concludes that such a clinical trial “might be permitted, but only following much more research” on risks and benefits, and “only for compelling reasons and under strict oversight.” Those situations could be limited to couples who both have a serious genetic disease and for whom embryo editing is “really the last reasonable option” if they want to have a healthy biological child, says committee co-chair Alta Charo, a bioethicist at the University of Wisconsin in Madison.

Some researchers are pleased with the report, saying it is consistent with previous conclusions that safely altering the DNA of human eggs, sperm, or early embryos—known as germline editing—to create a baby could be possible eventually. “They have closed the door to the vast majority of germline applications and left it open for a very small, well-defined subset. That’s not unreasonable in my opinion,” says genome researcher Eric Lander of the Broad Institute in Cambridge, Massachusetts. Lander was among the organizers of an international summit at NAS in December 2015 who called for more discussion before proceeding with embryo editing.

But others see the report as lowering the bar for such experiments because it does not explicitly say they should be prohibited for now. “It changes the tone to an affirmative position in the absence of the broad public debate this report calls for,” says Edward Lanphier, chairman of the DNA editing company Sangamo Therapeutics in Richmond, California. Two years ago, he co-authored a Nature commentary calling for a moratorium on clinical embryo editing.

One advocacy group opposed to embryo editing goes further. “We’re very disappointed with the report. It’s really a pretty dramatic shift from the existing and widespread agreement globally that human germline editing should be prohibited,” says Marcy Darnovsky, executive director of the Center for Genetics and Society in Berkeley, California.

Interestingly, this change of stance occurred just prior to a CRISPR patent decision (from my March 15, 2017 posting),

I have written about the CRISPR patent tussle (Harvard & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley) previously in a Jan. 6, 2015 posting and in a more detailed May 14, 2015 posting. I also mentioned (in a Jan. 17, 2017 posting) CRISPR and its patent issues in the context of a posting about a Slate.com series on Frankenstein and the novel’s applicability to our own time. This patent fight is being bitterly fought as fortunes are at stake.

It seems a decision has been made regarding the CRISPR patent claims. From a Feb. 17, 2017 article by Charmaine Distor for The Science Times,

After an intense court battle, the US Patent and Trademark Office (USPTO) released its ruling on February 15 [2017]. The rights for the CRISPR-Cas9 gene editing technology was handed over to the Broad Institute of Harvard University and the Massachusetts Institute of Technology (MIT).

According to an article in Nature, the said court battle was between the Broad Institute and the University of California. The two institutions are fighting over the intellectual property right for the CRISPR patent. The case between the two started when the patent was first awarded to the Broad Institute despite having the University of California apply first for the CRISPR patent.

Heidi Ledford’s Feb. 17, 2017 article for Nature provides more insight into the situation (Note: Links have been removed),

It [USPTO] ruled that the Broad Institute of Harvard and MIT in Cambridge could keep its patents on using CRISPR–Cas9 in eukaryotic cells. That was a blow to the University of California in Berkeley, which had filed its own patents and had hoped to have the Broad’s thrown out.

The fight goes back to 2012, when Jennifer Doudna at Berkeley, Emmanuelle Charpentier, then at the University of Vienna, and their colleagues outlined how CRISPR–Cas9 could be used to precisely cut isolated DNA. In 2013, Feng Zhang at the Broad and his colleagues — and other teams — showed how it could be adapted to edit DNA in eukaryotic cells such as plants, livestock and humans.

Berkeley filed for a patent earlier, but the USPTO granted the Broad’s patents first — and this week upheld them. There are high stakes involved in the ruling. The holder of key patents could make millions of dollars from CRISPR–Cas9’s applications in industry: already, the technique has sped up genetic research, and scientists are using it to develop disease-resistant livestock and treatments for human diseases.

….

I also noted this eyebrow-lifting statistic, “As for Ledford’s 3rd point, there are an estimated 763 patent families (groups of related patents) claiming CAS9, leading to the distinct possibility that the Broad Institute will be fighting many patent claims in the future.”

-30-

Part 2 covers three critical responses to the reporting, which between them describe the technology in more detail and the possibility of ‘designer babies’. CRISPR and editing the germline in the US (part 2 of 3): ‘designer babies’?

Part 3 is all about public discussion or, rather, the lack of one and the need for one, according to a couple of social scientists. Informally, there is already some discussion via pop culture, as Joelle Renstrom notes in her piece on the television series Orphan Black (although she is focused on the larger issues the series touches on) and as I touch on in my final comments. CRISPR and editing the germline in the US (part 3 of 3): public discussions and pop culture

IBM and a 5 nanometre chip

If this continues, they’re going to have to change the scale from nano to pico. IBM has announced work on a 5 nanometre (5nm) chip in a June 5, 2017 news item on Nanotechnology Now,

IBM (NYSE: IBM), its Research Alliance partners GLOBALFOUNDRIES and Samsung, and equipment suppliers have developed an industry-first process to build silicon nanosheet transistors that will enable 5 nanometer (nm) chips. The details of the process will be presented at the 2017 Symposia on VLSI Technology and Circuits conference in Kyoto, Japan. In less than two years since developing a 7nm test node chip with 20 billion transistors, scientists have paved the way for 30 billion switches on a fingernail-sized chip.

A June 5, 2017 IBM news release, which originated the news item, spells out some of the details about IBM’s latest breakthrough,

The resulting increase in performance will help accelerate cognitive computing [emphasis mine], the Internet of Things (IoT), and other data-intensive applications delivered in the cloud. The power savings could also mean that the batteries in smartphones and other mobile products could last two to three times longer than today’s devices, before needing to be charged.

Scientists working as part of the IBM-led Research Alliance at the SUNY Polytechnic Institute Colleges of Nanoscale Science and Engineering’s NanoTech Complex in Albany, NY achieved the breakthrough by using stacks of silicon nanosheets as the device structure of the transistor, instead of the standard FinFET architecture, which is the blueprint for the semiconductor industry up through 7nm node technology.

“For business and society to meet the demands of cognitive and cloud computing in the coming years, advancement in semiconductor technology is essential,” said Arvind Krishna, senior vice president, Hybrid Cloud, and director, IBM Research. “That’s why IBM aggressively pursues new and different architectures and materials that push the limits of this industry, and brings them to market in technologies like mainframes and our cognitive systems.”

The silicon nanosheet transistor demonstration, as detailed in the Research Alliance paper Stacked Nanosheet Gate-All-Around Transistor to Enable Scaling Beyond FinFET, and published by VLSI, proves that 5nm chips are possible, more powerful, and not too far off in the future.

Compared to the leading edge 10nm technology available in the market, a nanosheet-based 5nm technology can deliver 40 percent performance enhancement at fixed power, or 75 percent power savings at matched performance. This improvement enables a significant boost to meeting the future demands of artificial intelligence (AI) systems, virtual reality and mobile devices.
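It’s worth connecting this paragraph to the battery claim near the top of the release. A 75 percent power saving at matched performance would stretch battery life fourfold if the processor were the only load, but phones spend power on screens and radios too, which is presumably why IBM quotes the more modest two-to-three-times figure. A rough, hedged calculation (the chip’s share of device power is a number I’ve invented):

```python
# Back-of-envelope link between "75% power savings at matched performance"
# and the "2-3x battery life" claim. The chip's share of total device power
# is an invented assumption, not an IBM figure.
chip_power_ratio = 1 - 0.75            # chip now draws 25% of its old power
print(f"chip-only battery stretch: {1 / chip_power_ratio:.0f}x")      # 4x

chip_share = 0.8                       # assumed fraction of device power
device_ratio = chip_share * chip_power_ratio + (1 - chip_share)
print(f"device-level stretch:      {1 / device_ratio:.1f}x")          # 2.5x
```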

Building a New Switch

“This announcement is the latest example of the world-class research that continues to emerge from our groundbreaking public-private partnership in New York,” said Gary Patton, CTO and Head of Worldwide R&D at GLOBALFOUNDRIES. “As we make progress toward commercializing 7nm in 2018 at our Fab 8 manufacturing facility, we are actively pursuing next-generation technologies at 5nm and beyond to maintain technology leadership and enable our customers to produce a smaller, faster, and more cost efficient generation of semiconductors.”

IBM Research has explored nanosheet semiconductor technology for more than 10 years. This work is the first in the industry to demonstrate the feasibility to design and fabricate stacked nanosheet devices with electrical properties superior to FinFET architecture.

This same Extreme Ultraviolet (EUV) lithography approach used to produce the 7nm test node and its 20 billion transistors was applied to the nanosheet transistor architecture. Using EUV lithography, the width of the nanosheets can be adjusted continuously, all within a single manufacturing process or chip design. This adjustability permits the fine-tuning of performance and power for specific circuits – something not possible with today’s FinFET transistor architecture production, which is limited by its current-carrying fin height. Therefore, while FinFET chips can scale to 5nm, simply reducing the amount of space between fins does not provide increased current flow for additional performance.
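The ‘adjustable width’ advantage over fins is easy to quantify with a toy model. For a finFET, effective channel width comes in whole-fin increments (roughly twice the fin height plus the fin width per fin); a nanosheet stack lets designers dial in any width. The dimensions below are invented placeholders, not IBM’s process figures:

```python
# Toy comparison of effective channel width - the knob that sets drive
# current - in finFET vs stacked-nanosheet transistors. Dimensions are
# invented placeholders, not IBM process figures.
FIN_HEIGHT, FIN_WIDTH = 50.0, 7.0      # nm; fixed for every fin by the process

def finfet_width(n_fins):
    """Fins add width in whole-fin steps (two sidewalls + top per fin)."""
    return n_fins * (2 * FIN_HEIGHT + FIN_WIDTH)

def nanosheet_width(sheet_width, n_sheets=3):
    """Sheets conduct on top and bottom, and their width is continuous."""
    return n_sheets * 2 * sheet_width

target = 250.0                          # nm of effective width a circuit wants

fins = 1
while finfet_width(fins) < target:      # finFET must round up to whole fins
    fins += 1
print(f"finFET:    {fins} fins -> {finfet_width(fins):.0f} nm (overshoot)")

sheet = target / (3 * 2)                # nanosheet solves for an exact fit
print(f"nanosheet: 3 sheets x {sheet:.1f} nm -> {nanosheet_width(sheet):.0f} nm")
```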

“Today’s announcement continues the public-private model collaboration with IBM that is energizing SUNY-Polytechnic’s, Albany’s, and New York State’s leadership and innovation in developing next generation technologies,” said Dr. Bahgat Sammakia, Interim President, SUNY Polytechnic Institute. “We believe that enabling the first 5nm transistor is a significant milestone for the entire semiconductor industry as we continue to push beyond the limitations of our current capabilities. SUNY Poly’s partnership with IBM and Empire State Development is a perfect example of how Industry, Government and Academia can successfully collaborate and have a broad and positive impact on society.”

Part of IBM’s $3 billion, five-year investment in chip R&D (announced in 2014), the proof of nanosheet architecture scaling to a 5nm node continues IBM’s legacy of historic contributions to silicon and semiconductor innovation. They include the invention or first implementation of the single-cell DRAM, the Dennard scaling laws, chemically amplified photoresists, copper interconnect wiring, silicon-on-insulator, strained engineering, multicore microprocessors, immersion lithography, high-speed SiGe, high-k gate dielectrics, embedded DRAM, 3D chip stacking, and air gap insulators.

I last wrote about IBM and computer chips in a July 15, 2015 posting regarding their 7nm chip. You may want to scroll down approximately 55% of the way where I note research from MIT (Massachusetts Institute of Technology) about metal nanoparticles with unexpected properties possibly having an impact on nanoelectronics.

Getting back to IBM, they have produced a slick video about their 5nm chip breakthrough,

Meanwhile, Katherine Bourzac provides technical detail in a June 5, 2017 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), Note: A link has been removed,

Researchers at IBM believe the future of the transistor is in stacked nanosheets. …

Today’s state-of-the-art transistor is the finFET, named for the fin-like ridges of current-carrying silicon that project from the chip’s surface. The silicon fins are surrounded on their three exposed sides by a structure called the gate. The gate switches the flow of current on, and prevents electrons from leaking out when the transistor is off. This design is expected to last from this year’s bleeding-edge process technology, the “10-nanometer” node, through the next node, 7 nanometers. But any smaller, and these transistors will become difficult to switch off: electrons will leak out, even with the three-sided gates.

So the semiconductor industry has been working on alternatives for the upcoming 5 nanometer node. One popular idea is to use lateral silicon nanowires that are completely surrounded by the gate, preventing electron leaks and saving power. This design is called “gate all around.” IBM’s new design is a variation on this. In their test chips, each transistor is made up of three stacked horizontal sheets of silicon, each only a few nanometers thick and completely surrounded by a gate.

Why a sheet instead of a wire? Huiming Bu, director of silicon integration and devices at IBM, says nanosheets can bring back one of the benefits of pre-finFET, planar designs. Designers used to be able to vary the width of a transistor to prioritize fast operations or energy efficiency. Varying the amount of silicon in a finFET transistor is not practicable because it would mean making some fins taller and others shorter. Fins must all be the same height due to manufacturing constraints, says Bu.

IBM’s nanosheets can range from 8 to 50 nanometers in width. “Wider gives you better performance but takes more power, smaller width relaxes performance but reduces power use,” says Bu. This will allow circuit designers to pick and choose what they need, whether they are making a power efficient mobile chip processor or designing a bank of SRAM memory. “We are bringing flexibility back to the designers,” he says.

The test chips have 30 billion transistors. …

Bourzac’s posting offers such good detail and clear writing that it was a struggle to excerpt; I encourage you to read it (June 5, 2017 posting) in its entirety.

As for where this drive downwards to the ‘ever smaller’ is going, there’s Dexter Johnson’s June 29, 2017 posting about another IBM team’s research on his Nanoclast blog on the IEEE website (Note: Links have been removed),

There have been increasing signs coming from the research community that carbon nanotubes are beginning to step up to the challenge of offering a real alternative to silicon-based complementary metal-oxide semiconductor (CMOS) transistors.

Now, researchers at IBM Thomas J. Watson Research Center have advanced carbon nanotube-based transistors another step toward meeting the demands of the International Technology Roadmap for Semiconductors (ITRS) for the next decade. The IBM researchers have fabricated a p-channel transistor based on carbon nanotubes that takes up less than half the space of leading silicon technologies while operating at a lower voltage.

In research described in the journal Science, the IBM scientists used a carbon nanotube p-channel to reduce the transistor footprint; their transistor confines all of its components within 40 square nanometers [emphasis mine], an ITRS roadmap benchmark for ten years out.

One of the keys to being able to reduce the transistor to such a small size is the use of the carbon nanotube as the channel in place of silicon. The nanotube is only 1 nanometer thick. Such thinness offers a significant advantage in electrostatics, so that it’s possible to reduce the device gate length to 10 nanometers without seeing the device performance adversely affected by short-channel effects. An additional benefit of the nanotubes is that the electrons travel much faster, which contributes to a higher level of device performance.

Happy reading!

Robots and a new perspective on disability

I’ve long wondered about how disabilities would be viewed in a future (h/t May 4, 2017 news item on phys.org) where technology could render them largely irrelevant. A May 4, 2017 essay by Thusha (Gnanthusharan) Rajendran of Heriot-Watt University on TheConversation.com provides a perspective on the possibilities (Note: Links have been removed),

When dealing with the otherness of disability, the Victorians in their shame built huge out-of-sight asylums, and their legacy of “them” and “us” continues to this day. Two hundred years later, technologies offer us an alternative view. The digital age is shattering barriers, and what used to be the norm is now being challenged.

What if we could change the environment, rather than the person? What if a virtual assistant could help a visually impaired person with their online shopping? And what if a robot “buddy” could help a person with autism navigate the nuances of workplace politics? These are just some of the questions that are being asked and which need answers as the digital age challenges our perceptions of normality.

The treatment of people with developmental conditions has a chequered history. In towns and cities across Britain, you will still see large Victorian buildings that were once places to “look after” people with disabilities, that is, remove them from society. Things became worse still during the time of the Nazis with an idealisation of the perfect and rejection of Darwin’s idea of natural diversity.

Today we face similar challenges about differences versus abnormalities. Arguably, current diagnostic systems do not help, because they diagnose the person and not “the system”. So, a child has challenging behaviour, rather than being in distress; the person with autism has a communication disorder rather than simply not being understood.

Natural-born cyborgs

In contrast, the digital world is all about systems. The field of human-computer interaction is about how things work between humans and computers or robots. Philosopher Andy Clark argues that humans have always been natural-born cyborgs – that is, we have always used technology (in its broadest sense) to improve ourselves.

The most obvious example is language itself. In the digital age we can become truly digitally enhanced. How many of us Google something rather than remembering it? How do you feel when you have no access to wi-fi? How much do we favour texting, tweeting and Facebook over face-to-face conversations? How much do we love and need our smartphones?

In the new field of social robotics, my colleagues and I are developing a robot buddy to help adults with autism to understand, for example, if their boss is pleased or displeased with their work. For many adults with autism, it is not the work itself that stops them from having successful careers, it is the social environment surrounding work. From the stress-inducing interview to workplace politics, the modern world of work is a social minefield. It is not easy, at times, for us neurotypicals, but for a person with autism it is a world full of contradictions and implied meaning.

Rajendran goes on to highlight efforts with autistic individuals; he also includes this video of his December 14, 2016 TEDx Heriot-Watt University talk, which largely focuses on his work with robots and autism (Note: This runs approximately 15 mins.),

The talk reminded me of a Feb. 6, 2017 posting (scroll down about 33% of the way) where I discussed a recent book about science communication and its failure to recognize the importance of pop culture in that endeavour. As an example, I used a then recent announcement from MIT (Massachusetts Institute of Technology) about their emotion detection wireless application and the almost simultaneous appearance of that application in a Feb. 2, 2017 episode of The Big Bang Theory (a popular US television comedy) featuring a character who could be seen as autistic making use of the emotion detection device.

In any event, the work described in the MIT news release is very similar to Rajendran’s, albeit communicated to the public through entirely different channels: a TEDx talk and TheConversation.com (channels aimed at academics and those with academic interests) versus a pop culture television comedy with broad appeal.

An explanation of neural networks from the Massachusetts Institute of Technology (MIT)

I always enjoy the MIT ‘explainers’ and have been a little sad that I haven’t stumbled across one in a while. Until now, that is. Here’s an April 14, 2017 neural network ‘explainer’ (in its entirety) by Larry Hardesty (?),

In the past 10 years, the best-performing artificial-intelligence systems — such as the speech recognizers on smartphones or Google’s latest automatic translator — have resulted from a technique called “deep learning.”

Deep learning is in fact a new name for an approach to artificial intelligence called neural networks, which have been going in and out of fashion for more than 70 years. Neural networks were first proposed in 1943 by Warren McCulloch and Walter Pitts, two University of Chicago researchers who moved to MIT in 1952 as founding members of what’s sometimes called the first cognitive science department.

Neural nets were a major area of research in both neuroscience and computer science until 1969, when, according to computer science lore, they were killed off by the MIT mathematicians Marvin Minsky and Seymour Papert, who a year later would become co-directors of the new MIT Artificial Intelligence Laboratory.

The technique then enjoyed a resurgence in the 1980s, fell into eclipse again in the first decade of the new century, and has returned like gangbusters in the second, fueled largely by the increased processing power of graphics chips.

“There’s this idea that ideas in science are a bit like epidemics of viruses,” says Tomaso Poggio, the Eugene McDermott Professor of Brain and Cognitive Sciences at MIT, an investigator at MIT’s McGovern Institute for Brain Research, and director of MIT’s Center for Brains, Minds, and Machines. “There are apparently five or six basic strains of flu viruses, and apparently each one comes back with a period of around 25 years. People get infected, and they develop an immune response, and so they don’t get infected for the next 25 years. And then there is a new generation that is ready to be infected by the same strain of virus. In science, people fall in love with an idea, get excited about it, hammer it to death, and then get immunized — they get tired of it. So ideas should have the same kind of periodicity!”

Weighty matters

Neural nets are a means of doing machine learning, in which a computer learns to perform some task by analyzing training examples. Usually, the examples have been hand-labeled in advance. An object recognition system, for instance, might be fed thousands of labeled images of cars, houses, coffee cups, and so on, and it would find visual patterns in the images that consistently correlate with particular labels.

Modeled loosely on the human brain, a neural net consists of thousands or even millions of simple processing nodes that are densely interconnected. Most of today’s neural nets are organized into layers of nodes, and they’re “feed-forward,” meaning that data moves through them in only one direction. An individual node might be connected to several nodes in the layer beneath it, from which it receives data, and several nodes in the layer above it, to which it sends data.

To each of its incoming connections, a node will assign a number known as a “weight.” When the network is active, the node receives a different data item — a different number — over each of its connections and multiplies it by the associated weight. It then adds the resulting products together, yielding a single number. If that number is below a threshold value, the node passes no data to the next layer. If the number exceeds the threshold value, the node “fires,” which in today’s neural nets generally means sending the number — the sum of the weighted inputs — along all its outgoing connections.
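
To make the arithmetic concrete, here’s a minimal sketch of one such node in Python (my own illustration, not code from MIT; the inputs, weights, and threshold are invented for the example),

def node_output(inputs, weights, threshold):
    # Multiply each incoming data item by its connection weight and sum.
    total = sum(x * w for x, w in zip(inputs, weights))
    # Below the threshold the node passes nothing; above it, the node
    # "fires" and sends along the sum of the weighted inputs.
    return total if total > threshold else 0.0

print(node_output([0.5, -1.2, 3.0], [0.8, 0.4, 0.1], threshold=0.0))  # about 0.22, so this node fires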

When a neural net is being trained, all of its weights and thresholds are initially set to random values. Training data is fed to the bottom layer — the input layer — and it passes through the succeeding layers, getting multiplied and added together in complex ways, until it finally arrives, radically transformed, at the output layer. During training, the weights and thresholds are continually adjusted until training data with the same labels consistently yield similar outputs.
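
Hardesty doesn’t name a particular algorithm, so purely as an illustration here’s the classic perceptron update rule doing the kind of adjustment described above: weights and threshold start random and get nudged whenever the output disagrees with a hand-assigned label (the OR-gate data and learning rate are my own choices),

import random

def train(examples, epochs=50, lr=0.1):
    n_inputs = len(examples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]  # random start
    threshold = random.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:  # label is a hand-assigned 0 or 1
            fired = sum(x * w for x, w in zip(inputs, weights)) > threshold
            error = label - int(fired)
            # Nudge each weight toward the answer the label demands.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            threshold -= lr * error
    return weights, threshold

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # logical OR
weights, threshold = train(data)
for inputs, label in data:
    fired = sum(x * w for x, w in zip(inputs, weights)) > threshold
    print(f"{inputs} -> {int(fired)} (label {label})")

After enough passes the four outputs consistently match the four labels, which is exactly the stopping condition the paragraph above describes.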

Minds and machines

The neural nets described by McCulloch and Pitts in 1943 had thresholds and weights, but they weren’t arranged into layers, and the researchers didn’t specify any training mechanism. What McCulloch and Pitts showed was that a neural net could, in principle, compute any function that a digital computer could. The result was more neuroscience than computer science: The point was to suggest that the human brain could be thought of as a computing device.

Neural nets continue to be a valuable tool for neuroscientific research. For instance, particular network layouts or rules for adjusting weights and thresholds have reproduced observed features of human neuroanatomy and cognition, an indication that they capture something about how the brain processes information.

The first trainable neural network, the Perceptron, was demonstrated by the Cornell University psychologist Frank Rosenblatt in 1957. The Perceptron’s design was much like that of the modern neural net, except that it had only one layer with adjustable weights and thresholds, sandwiched between input and output layers.

Perceptrons were an active area of research in both psychology and the fledgling discipline of computer science until 1969, when Minsky and Papert published a book titled “Perceptrons,” which demonstrated that executing certain fairly common computations on Perceptrons would be impractically time consuming.

“Of course, all of these limitations kind of disappear if you take machinery that is a little more complicated — like, two layers,” Poggio says. But at the time, the book had a chilling effect on neural-net research.

“You have to put these things in historical context,” Poggio says. “They were arguing for programming — for languages like Lisp. Not many years before, people were still using analog computers. It was not clear at all at the time that programming was the way to go. I think they went a little bit overboard, but as usual, it’s not black and white. If you think of this as this competition between analog computing and digital computing, they fought for what at the time was the right thing.”

Periodicity

By the 1980s, however, researchers had developed algorithms for modifying neural nets’ weights and thresholds that were efficient enough for networks with more than one layer, removing many of the limitations identified by Minsky and Papert. The field enjoyed a renaissance.

But intellectually, there’s something unsatisfying about neural nets. Enough training may revise a network’s settings to the point that it can usefully classify data, but what do those settings mean? What image features is an object recognizer looking at, and how does it piece them together into the distinctive visual signatures of cars, houses, and coffee cups? Looking at the weights of individual connections won’t answer that question.

In recent years, computer scientists have begun to come up with ingenious methods for deducing the analytic strategies adopted by neural nets. But in the 1980s, the networks’ strategies were indecipherable. So around the turn of the century, neural networks were supplanted by support vector machines, an alternative approach to machine learning that’s based on some very clean and elegant mathematics.

The recent resurgence in neural networks — the deep-learning revolution — comes courtesy of the computer-game industry. The complex imagery and rapid pace of today’s video games require hardware that can keep up, and the result has been the graphics processing unit (GPU), which packs thousands of relatively simple processing cores on a single chip. It didn’t take long for researchers to realize that the architecture of a GPU is remarkably like that of a neural net.

Modern GPUs enabled the one-layer networks of the 1960s and the two- to three-layer networks of the 1980s to blossom into the 10-, 15-, even 50-layer networks of today. That’s what the “deep” in “deep learning” refers to — the depth of the network’s layers. And currently, deep learning is responsible for the best-performing systems in almost every area of artificial-intelligence research.
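
Tying this back to the node sketched earlier: “depth” is nothing more than how many layers of those nodes sit between input and output. A toy two-layer pass (again my own illustration, with made-up weights) looks like this,

def node(inputs, weights, threshold=0.0):
    total = sum(x * w for x, w in zip(inputs, weights))
    return total if total > threshold else 0.0

def layer(inputs, weight_rows):
    # One row of weights per node in the layer; every node sees every input.
    return [node(inputs, row) for row in weight_rows]

x = [0.5, -1.2, 3.0]
w1 = [[0.8, 0.4, 0.1], [-0.3, 0.2, 0.5]]  # layer 1: 3 inputs feeding 2 nodes
w2 = [[1.0, 0.5]]                         # layer 2: 2 nodes feeding 1 output
print(layer(layer(x, w1), w2))            # a 50-layer net just stacks more calls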

Under the hood

The networks’ opacity is still unsettling to theorists, but there’s headway on that front, too. In addition to directing the Center for Brains, Minds, and Machines (CBMM), Poggio leads the center’s research program in Theoretical Frameworks for Intelligence. Recently, Poggio and his CBMM colleagues have released a three-part theoretical study of neural networks.

The first part, which was published last month in the International Journal of Automation and Computing, addresses the range of computations that deep-learning networks can execute and when deep networks offer advantages over shallower ones. Parts two and three, which have been released as CBMM technical reports, address the problems of global optimization, or guaranteeing that a network has found the settings that best accord with its training data, and overfitting, or cases in which the network becomes so attuned to the specifics of its training data that it fails to generalize to other instances of the same categories.

There are still plenty of theoretical questions to be answered, but CBMM researchers’ work could help ensure that neural networks finally break the generational cycle that has brought them in and out of favor for seven decades.

This image from MIT illustrates a ‘modern’ neural network,

Most applications of deep learning use “convolutional” neural networks, in which the nodes of each layer are clustered, the clusters overlap, and each cluster feeds data to multiple nodes (orange and green) of the next layer. Image: Jose-Luis Olivares/MIT

h/t phys.org April 17, 2017

One final note, I wish the folks at MIT had an ‘explainer’ archive. I’m not sure how to find any more ‘explainers’ on MIT’s website.

Worm-inspired gel material and soft robots

The Nereis virens worm inspired new research out of the MIT Laboratory for Atomistic and Molecular Mechanics. Its jaw is made of soft organic material, but is as strong as harder materials such as human dentin. Photo: Alexander Semenov/Wikimedia Commons

What an amazing worm! Here’s more about robots inspired by the Nereis virens worm in a March 20, 2017 news item on Nanowerk,

A new material that naturally adapts to changing environments was inspired by the strength, stability, and mechanical performance of the jaw of a marine worm. The protein material, which was designed and modeled by researchers from the Laboratory for Atomistic and Molecular Mechanics (LAMM) in the Department of Civil and Environmental Engineering (CEE) [at the Massachusetts Institute of Technology {MIT}], and synthesized in collaboration with the Air Force Research Lab (AFRL) at Wright-Patterson Air Force Base, Ohio, expands and contracts based on changing pH levels and ion concentrations. It was developed by studying how the jaw of Nereis virens, a sand worm, forms and adapts in different environments.

The resulting pH- and ion-sensitive material is able to respond and react to its environment. Understanding this naturally-occurring process can be particularly helpful for active control of the motion or deformation of actuators for soft robotics and sensors without using external power supply or complex electronic controlling devices. It could also be used to build autonomous structures.

A March 20, 2017 MIT news release, which originated the news item, provides more detail,

“The ability of dramatically altering the material properties, by changing its hierarchical structure starting at the chemical level, offers exciting new opportunities to tune the material, and to build upon the natural material design towards new engineering applications,” wrote Markus J. Buehler, the McAfee Professor of Engineering, head of CEE, and senior author of the paper.

The research, recently published in ACS Nano, shows that depending on the ions and pH levels in the environment, the protein material expands and contracts into different geometric patterns. When the conditions change again, the material reverts to its original shape. This makes it particularly useful for smart composite materials with tunable mechanics and for self-powered robots that use pH and ion conditions to change the material stiffness or generate functional deformations.

Finding inspiration in the strong, stable jaw of a marine worm

In order to create bio-inspired materials that can be used for soft robotics, sensors, and other uses — such as that inspired by the Nereis — engineers and scientists at LAMM and AFRL needed to first understand how these materials form in the Nereis worm, and how they ultimately behave in various environments. This understanding required the development of a model that encompasses all the different length scales, from the atomic level up, and is able to predict the material behavior. This model helps to fully explain the Nereis worm’s jaw and its exceptional strength.

“Working with AFRL gave us the opportunity to pair our atomistic simulations with experiments,” said CEE research scientist Francisco Martin-Martinez. AFRL experimentally synthesized a hydrogel, a gel-like material made mostly of water, which is composed of recombinant Nvjp-1 protein responsible for the structural stability and impressive mechanical performance of the Nereis jaw. The hydrogel was used to test how the protein shrinks and changes behavior based on pH and ions in the environment.

The Nereis jaw is mostly made of organic matter, meaning it is a soft protein material with a consistency similar to gelatin. In spite of this, its hardness, which has been reported to range between 0.4 and 0.8 gigapascals (GPa), is similar to that of harder materials like human dentin. “It’s quite remarkable that this soft protein material, with a consistency akin to Jell-O, can be as strong as calcified minerals that are found in human dentin and harder materials such as bones,” Buehler said.

At MIT, the researchers looked at the makeup of the Nereis jaw on a molecular scale to see what makes the jaw so strong and adaptive. At this scale, the metal-coordinated crosslinks, the presence of metal in its molecular structure, provide a molecular network that makes the material stronger and at the same time make the molecular bond more dynamic, and ultimately able to respond to changing conditions. At the macroscopic scale, these dynamic metal-protein bonds result in an expansion/contraction behavior.

Combining the protein structural studies from AFRL with the molecular understanding from LAMM, Buehler, Martin-Martinez, CEE Research Scientist Zhao Qin, and former PhD student Chia-Ching Chou ’15 created a multiscale model that is able to predict the mechanical behavior of materials that contain this protein in various environments. “These atomistic simulations help us to visualize the atomic arrangements and molecular conformations that underlay the mechanical performance of these materials,” Martin-Martinez said.

Specifically, using this model the research team was able to design, test, and visualize how different molecular networks change and adapt to various pH levels, taking into account the biological and mechanical properties.

By looking at the molecular and biological makeup of the Nereis virens and using the predictive model of the mechanical behavior of the resulting protein material, the LAMM researchers were able to more fully understand the protein material at different scales and provide a comprehensive understanding of how such protein materials form and behave in differing pH settings. This understanding guides new material designs for soft robots and sensors.

Identifying the link between environmental properties and movement in the material

The predictive model explained how the pH-sensitive materials change shape and behavior, which the researchers used for designing new pH-responsive geometric structures. Depending on the original geometric shape tested in the protein material and the properties surrounding it, the LAMM researchers found that the material either spirals or takes a Cypraea shell-like shape when the pH levels are changed. These are only some examples of the potential that this new material could have for developing soft robots, sensors, and autonomous structures.

Using the predictive model, the research team found that the material not only changes form, but it also reverts to its original shape when the pH levels change. At the molecular level, histidine amino acids present in the protein bind strongly to the ions in the environment. This very local chemical reaction between amino acids and metal ions has an effect on the overall conformation of the protein at a larger scale. When environmental conditions change, the histidine-metal interactions change accordingly, which affects the protein conformation and in turn the material response.

“Changing the pH or changing the ions is like flipping a switch. You switch it on or off, depending on what environment you select, and the hydrogel expands or contracts,” said Martin-Martinez.
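
Out of curiosity, I worked out what that switch might look like numerically. The sketch below is my own back-of-the-envelope illustration, not the team’s model: histidine’s imidazole side chain has a pKa near 6, so the Henderson-Hasselbalch relation gives the fraction of side chains that are deprotonated, and therefore free to coordinate metal ions, at a given pH (reading “more metal binding” as “contracted” is my simplification),

HISTIDINE_PKA = 6.0  # approximate pKa of histidine's imidazole side chain

def metal_binding_fraction(ph):
    # Henderson-Hasselbalch: fraction of side chains deprotonated
    # (and so available to coordinate metal ions) at this pH.
    return 1.0 / (1.0 + 10 ** (HISTIDINE_PKA - ph))

for ph in (4.0, 6.0, 8.0):
    f = metal_binding_fraction(ph)
    state = "crosslinked (contracted)" if f > 0.5 else "uncrosslinked (expanded)"
    print(f"pH {ph}: {f:.0%} can bind metal -> {state}")

Swing the pH from 4 to 8 and the metal-binding fraction jumps from roughly 1 percent to roughly 99 percent, which is about as switch-like as chemistry gets.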

LAMM found that at the molecular level, the structure of the protein material is strengthened when the environment contains zinc ions at certain pH levels. This creates more stable metal-coordinated crosslinks in the material’s molecular structure, which makes the molecules more dynamic and flexible.

This insight into the material’s design and its flexibility is extremely useful for environments with changing pH levels. Its ability to change shape in response to changing acidity levels could be used for soft robotics. “Most soft robotics require power supply to drive the motion and to be controlled by complex electronic devices. Our work toward designing of multifunctional material may provide another pathway to directly control the material property and deformation without electronic devices,” said Qin.

By studying and modeling the molecular makeup and the behavior of the primary protein responsible for the mechanical properties ideal for Nereis jaw performance, the LAMM researchers are able to link environmental properties to movement in the material and have a more comprehensive understanding of the strength of the Nereis jaw.

Here’s a link to and a citation for the paper,

Ion Effect and Metal-Coordinated Cross-Linking for Multiscale Design of Nereis Jaw Inspired Mechanomutable Materials by Chia-Ching Chou, Francisco J. Martin-Martinez, Zhao Qin, Patrick B. Dennis, Maneesh K. Gupta, Rajesh R. Naik, and Markus J. Buehler. ACS Nano, 2017, 11 (2), pp 1858–1868 DOI: 10.1021/acsnano.6b07878 Publication Date (Web): February 6, 2017

Copyright © 2017 American Chemical Society

This paper is behind a paywall.

Tree-on-a-chip

It’s usually organ-on-a-chip or lab-on-a-chip or human-on-a-chip; this is my first tree-on-a-chip.

Engineers have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and other plants. Courtesy: MIT

From a March 20, 2017 news item on phys.org,

Trees and other plants, from towering redwoods to diminutive daisies, are nature’s hydraulic pumps. They are constantly pulling water up from their roots to the topmost leaves, and pumping sugars produced by their leaves back down to the roots. This constant stream of nutrients is shuttled through a system of tissues called xylem and phloem, which are packed together in woody, parallel conduits.

Now engineers at MIT [Massachusetts Institute of Technology] and their collaborators have designed a microfluidic device they call a “tree-on-a-chip,” which mimics the pumping mechanism of trees and plants. Like its natural counterparts, the chip operates passively, requiring no moving parts or external pumps. It is able to pump water and sugars through the chip at a steady flow rate for several days. The results are published this week in Nature Plants.

A March 20, 2017 MIT news release by Jennifer Chu, which originated the news item, describes the work in more detail,

Anette “Peko” Hosoi, professor and associate department head for operations in MIT’s Department of Mechanical Engineering, says the chip’s passive pumping may be leveraged as a simple hydraulic actuator for small robots. Engineers have found it difficult and expensive to make tiny, movable parts and pumps to power complex movements in small robots. The team’s new pumping mechanism may enable robots whose motions are propelled by inexpensive, sugar-powered pumps.

“The goal of this work is cheap complexity, like one sees in nature,” Hosoi says. “It’s easy to add another leaf or xylem channel in a tree. In small robotics, everything is hard, from manufacturing, to integration, to actuation. If we could make the building blocks that enable cheap complexity, that would be super exciting. I think these [microfluidic pumps] are a step in that direction.”

Hosoi’s co-authors on the paper are lead author Jean Comtet, a former graduate student in MIT’s Department of Mechanical Engineering; Kaare Jensen of the Technical University of Denmark; and Robert Turgeon and Abraham Stroock, both of Cornell University.

A hydraulic lift

The group’s tree-inspired work grew out of a project on hydraulic robots powered by pumping fluids. Hosoi was interested in designing hydraulic robots at the small scale that could perform actions similar to much bigger robots like Boston Dynamics’ BigDog, a four-legged, Saint Bernard-sized robot that runs and jumps over rough terrain, powered by hydraulic actuators.

“For small systems, it’s often expensive to manufacture tiny moving pieces,” Hosoi says. “So we thought, ‘What if we could make a small-scale hydraulic system that could generate large pressures, with no moving parts?’ And then we asked, ‘Does anything do this in nature?’ It turns out that trees do.”

The general understanding among biologists has been that water, propelled by surface tension, travels up a tree’s channels of xylem, then diffuses through a semipermeable membrane and down into channels of phloem that contain sugar and other nutrients.

The more sugar there is in the phloem, the more water flows from xylem to phloem to balance out the sugar-to-water gradient, in a passive process known as osmosis. The resulting water flow flushes nutrients down to the roots. Trees and plants are thought to maintain this pumping process as more water is drawn up from their roots.

“This simple model of xylem and phloem has been well-known for decades,” Hosoi says. “From a qualitative point of view, this makes sense. But when you actually run the numbers, you realize this simple model does not allow for steady flow.”

In fact, engineers have previously attempted to design tree-inspired microfluidic pumps, fabricating parts that mimic xylem and phloem. But they found that these designs quickly stopped pumping within minutes.

It was Hosoi’s student Comtet who identified a third essential part to a tree’s pumping system: its leaves, which produce sugars through photosynthesis. Comtet’s model includes this additional source of sugars that diffuse from the leaves into a plant’s phloem, increasing the sugar-to-water gradient, which in turn maintains a constant osmotic pressure, circulating water and nutrients continuously throughout a tree.
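
Comtet’s fix lends itself to a few lines of arithmetic. The toy model below is mine, not the team’s published model: water flux follows the sugar concentration, the resulting flow washes sugar out toward the “roots,” and a constant influx from the “leaf” keeps the gradient, and hence the flow, from dying away (all rate constants are invented),

def steady_flow(sugar_influx, steps=20000, dt=0.01):
    s = 0.2        # sugar concentration in the phloem (arbitrary units)
    k_osm = 1.0    # membrane permeability: water flux scales with sugar
    k_wash = 0.5   # rate at which the flow flushes sugar downward
    for _ in range(steps):
        flow = k_osm * s                             # osmotic water flux
        s += (sugar_influx - k_wash * flow * s) * dt
    return k_osm * s                                 # flow after a long run

print("no sugar source:", round(steady_flow(0.0), 3))  # decays toward zero
print("with a 'leaf'  :", round(steady_flow(0.5), 3))  # settles near 1.0

Without the sugar source the gradient, and with it the pumping, peters out, much as the earlier xylem-and-phloem-only designs did; with it, the flow settles into a steady state.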

Running on sugar

With Comtet’s hypothesis in mind, Hosoi and her team designed their tree-on-a-chip, a microfluidic pump that mimics a tree’s xylem, phloem, and most importantly, its sugar-producing leaves.

To make the chip, the researchers sandwiched together two plastic slides, through which they drilled small channels to represent xylem and phloem. They filled the xylem channel with water, and the phloem channel with water and sugar, then separated the two slides with a semipermeable material to mimic the membrane between xylem and phloem. They placed another membrane over the slide containing the phloem channel, and set a sugar cube on top to represent the additional source of sugar diffusing from a tree’s leaves into the phloem. They hooked the chip up to a tube, which fed water from a tank into the chip.

With this simple setup, the chip was able to passively pump water from the tank through the chip and out into a beaker, at a constant flow rate for several days, as opposed to previous designs that only pumped for several minutes.

“As soon as we put this sugar source in, we had it running for days at a steady state,” Hosoi says. “That’s exactly what we need. We want a device we can actually put in a robot.”

Hosoi envisions that the tree-on-a-chip pump may be built into a small robot to produce hydraulically powered motions, without requiring active pumps or parts.

“If you design your robot in a smart way, you could absolutely stick a sugar cube on it and let it go,” Hosoi says.

This research was supported, in part, by the Defense Advanced Research Projects Agency [DARPA].

This research’s funding connection to DARPA reminded me that MIT has an Institute of Soldier Nanotechnologies.

Getting back to the tree-on-a-chip, here’s a link to and a citation for the paper,

Passive phloem loading and long-distance transport in a synthetic tree-on-a-chip by Jean Comtet, Kaare H. Jensen, Robert Turgeon, Abraham D. Stroock & A. E. Hosoi. Nature Plants 3, Article number: 17032 (2017)  doi:10.1038/nplants.2017.32 Published online: 20 March 2017

This paper is behind a paywall.

Formation of a time (temporal) crystal

It’s a crystal arranged in time according to a March 8, 2017 University of Texas at Austin news release (also on EurekAlert), Note: Links have been removed,

Salt, snowflakes and diamonds are all crystals, meaning their atoms are arranged in 3-D patterns that repeat. Today scientists are reporting in the journal Nature on the creation of a phase of matter, dubbed a time crystal, in which atoms move in a pattern that repeats in time rather than in space.

The atoms in a time crystal never settle down into what’s known as thermal equilibrium, a state in which they all have the same amount of heat. It’s one of the first examples of a broad new class of matter, called nonequilibrium phases, that have been predicted but until now have remained out of reach. Like explorers stepping onto an uncharted continent, physicists are eager to explore this exotic new realm.

“This opens the door to a whole new world of nonequilibrium phases,” says Andrew Potter, an assistant professor of physics at The University of Texas at Austin. “We’ve taken these theoretical ideas that we’ve been poking around for the last couple of years and actually built it in the laboratory. Hopefully, this is just the first example of these, with many more to come.”

Some of these nonequilibrium phases of matter may prove useful for storing or transferring information in quantum computers.

Potter is part of the team led by researchers at the University of Maryland who successfully created the first time crystal from ions, or electrically charged atoms, of the element ytterbium. By applying just the right electrical field, the researchers levitated 10 of these ions above a surface like a magician’s assistant. Next, they whacked the atoms with a laser pulse, causing them to flip head over heels. Then they hit them again and again in a regular rhythm. That set up a pattern of flips that repeated in time.

Crucially, Potter noted, the pattern of atom flips repeated only half as fast as the laser pulses. This would be like pounding on a bunch of piano keys twice a second and notes coming out only once a second. This weird quantum behavior was a signature that he and his colleagues predicted, and helped confirm that the result was indeed a time crystal.
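
That signature is easy to sketch (my own toy illustration, not the experiment’s physics): if every pulse flips every spin, the pattern can only match its starting configuration every second pulse,

spins = [1, -1, 1, 1, -1, 1, -1, 1, 1, -1]  # ten ion "spins", up (+1) or down (-1)

state = list(spins)
for pulse in range(1, 7):
    state = [-s for s in state]  # each laser pulse flips every spin
    print("pulse", pulse, "- back to the starting pattern?", state == spins)

The printout alternates False, True, False, True: the pattern repeats every two pulses, that is, at half the drive frequency, which is the piano-key behavior Potter describes.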

The team also consists of researchers at the National Institute of Standards and Technology, the University of California, Berkeley and Harvard University, in addition to the University of Maryland and UT Austin.

Frank Wilczek, a Nobel Prize-winning physicist at the Massachusetts Institute of Technology, was teaching a class about crystals in 2012 when he wondered whether a phase of matter could be created such that its atoms move in a pattern that repeats in time, rather than just in space.

Potter and his colleague Norman Yao at UC Berkeley created a recipe for building such a time crystal and developed ways to confirm that, once you had built such a crystal, it was in fact the real deal. That theoretical work was announced publicly last August and then published in January in the journal Physical Review Letters.

A team led by Chris Monroe of the University of Maryland in College Park built a time crystal, and Potter and Yao helped confirm that it indeed had the properties they predicted. The team announced that breakthrough—constructing a working time crystal—last September and is publishing the full, peer-reviewed description today in Nature.

A team led by Mikhail Lukin at Harvard University created a second time crystal a month after the first team’s, in that case made from a diamond.

Here’s a link to and a citation for the paper,

Observation of a discrete time crystal by J. Zhang, P. W. Hess, A. Kyprianidis, P. Becker, A. Lee, J. Smith, G. Pagano, I.-D. Potirniche, A. C. Potter, A. Vishwanath, N. Y. Yao, & C. Monroe. Nature 543, 217–220 (09 March 2017) doi:10.1038/nature21413 Published online 08 March 2017

This paper is behind a paywall.