Tag Archives: Harvard University

Pillars of Creation in new 3D visualization

A June 26, 2024 news item on phys.org announced a visualization of the Pillars of Creation,

Made famous in 1995 by NASA’s [US National Aeronautics and Space Administration] Hubble Space Telescope, the Pillars of Creation in the heart of the Eagle Nebula have captured imaginations worldwide with their arresting, ethereal beauty.

Now, NASA has released a new 3D visualization of these towering celestial structures using data from NASA’s Hubble and James Webb space telescopes. This is the most comprehensive and detailed multiwavelength movie yet of these star-birthing clouds.

A June 26, 2024 NASA news release (also on EurekAlert), which originated the news item, provides detail about the pillars and the visualization, Note: The news release on EurekAlert has its entire text located in the caption for the image,

“By flying past and amongst the pillars, viewers experience their three-dimensional structure and see how they look different in the Hubble visible-light view versus the Webb infrared-light view,” explained principal visualization scientist Frank Summers of the Space Telescope Science Institute (STScI) in Baltimore, who led the movie development team for NASA’s Universe of Learning. “The contrast helps them understand why we have more than one space telescope to observe different aspects of the same object.”

The four Pillars of Creation, made primarily of cool molecular hydrogen and dust, are being eroded by the fierce winds and punishing ultraviolet light of nearby hot, young stars. Finger-like structures larger than the solar system protrude from the tops of the pillars. Within these fingers can be embedded, embryonic stars. The tallest pillar stretches across three light-years, three-quarters of the distance between our Sun and the next nearest star.

The movie takes visitors into the three-dimensional structures of the pillars. Rather than an artistic interpretation, the video is based on observational data from a science paper led by Anna McLeod, an associate professor at the University of Durham in the United Kingdom. McLeod also served as a scientific advisor on the movie project.

“The Pillars of Creation were always on our minds to create in 3D. Webb data in combination with Hubble data allowed us to see the Pillars in more complete detail,” said production lead Greg Bacon of STScI. “Understanding the science and how to best represent it allowed our small, talented team to meet the challenge of visualizing this iconic structure.”

The new visualization helps viewers experience how two of the world’s most powerful space telescopes work together to provide a more complex and holistic portrait of the pillars. Hubble sees objects that glow in visible light, at thousands of degrees. Webb’s infrared vision, which is sensitive to cooler objects with temperatures of just hundreds of degrees, pierces through obscuring dust to see stars embedded in the pillars.

“When we combine observations from NASA’s space telescopes across different wavelengths of light, we broaden our understanding of the universe,” said Mark Clampin, Astrophysics Division director at NASA Headquarters in Washington. “The Pillars of Creation region continues to offer us new insights that hone our understanding of how stars form. Now, with this new visualization, everyone can experience this rich, captivating landscape in a new way.”

Produced for NASA by STScI with partners at Caltech/IPAC, and developed by the AstroViz Project of NASA’s Universe of Learning, the 3D visualization is part of a longer, narrated video that combines a direct connection to the science and scientists of NASA’s Astrophysics missions with attention to the needs of an audience of youth, families, and lifelong learners. It enables viewers to explore fundamental questions in science, experience how science is done, and discover the universe for themselves.

Several stages of star formation are highlighted in the visualization. As viewers approach the central pillar, they see at its top an embedded, infant protostar glimmering bright red in infrared light. Near the top of the left pillar is a diagonal jet of material ejected from a newborn star. Though the jet is evidence of star birth, viewers can’t see the star itself. Finally, at the end of one of the left pillar’s protruding “fingers” is a blazing, brand-new star.

A bonus product from this visualization is a new 3D printable model of the Pillars of Creation. The base model of the four pillars used in the visualization has been adapted to the STL file format, so that viewers can download the model file and print it out on 3D printers. Examining the structure of the pillars in this tactile and interactive way adds new perspectives and insights to the overall experience.

More visualizations and connections between the science of nebulas and learners can be explored through other products produced by NASA’s Universe of Learning such as ViewSpace, a video exhibit that is currently running at almost 200 museums and planetariums across the United States. Visitors can go beyond video to explore the images produced by space telescopes with interactive tools now available for museums and planetariums.

NASA’s Universe of Learning materials are based upon work supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute, working in partnership with Caltech/IPAC, Pasadena, California, Center for Astrophysics | Harvard & Smithsonian, Cambridge, Massachusetts, and Jet Propulsion Laboratory, La Cañada Flintridge, California.

Enjoy:

Moving past xenobots (living robots based on frog stem cells)

Laura Tran’s June 14, 2024 article for The Scientist gives both a brief history of Michael Levin’s and his team’s work on developing living robots using stem cells from an African clawed frog (known as Xenopus laevis) and offers an update on the team’s work on synthetic lifeforms. First, the xenobots, Note 1: This could be difficult for people with issues regarding animal experimentation; Note 2: Links have been removed,

It began with little pieces of embryos scooting around in a dish. In 1998, these unassuming cells caught the attention of Michael Levin, then a postdoctoral researcher studying cell biology at Harvard University. He recalled simply recording a video before tucking the memory away. Nearly two decades later, Levin, now a developmental and synthetic biologist at Tufts University, experienced a sense of déjà vu. He observed that as a student transplanted tissues from one embryo to another, some loose cells swam free in the dish. 

Levin had a keen interest in the collective intelligence of cells, tissues, organs, and artificial constructs within regenerative medicine, and he wondered if he could explore the plasticity and harness the untapped capabilities of these swirling embryonic stem cells. “At that point, I started thinking that this is probably an amazing biorobotics platform,” recalled Levin. He rushed to describe this idea to Douglas Blackiston, a developmental and synthetic biologist at Tufts University who worked alongside Levin. 

At the time, Blackiston was conducting plasticity research to restore vision in blind African clawed frog tadpoles, Xenopus laevis, a model organism used to understand development. Blackiston transplanted the eyes to unusual places, such as the back of the head or even the tail, to test the integration of transplanted sensory organs.1 The eye axons extended to either the gut or spinal cord. In a display of dynamic plasticity, transplanted eyes on the tail that extended an optic nerve into the spinal cord restored the tadpoles’ vision.2 

In a similar vein, Josh Bongard, an evolutionary roboticist at the University of Vermont and Levin’s longtime colleague, pondered how robots could evolve like animals. He wanted to apply biological evolution to a machine by tinkering with the brains and bodies of robots and explored this idea with Sam Kriegman, then a graduate student in Bongard’s group and now an assistant professor at Northwestern University. Kriegman used evolutionary algorithms and artificial intelligence (AI) to simulate biological evolution in a virtual creature before teaming up with engineers to construct a physical version. 

I have two stories about the xenobots. I was a little late to the party, so the June 21, 2021 posting is about xenobots 2.0 and their ability to move and the June 8, 2022 posting is about their ability to reproduce.

Tran’s June 14, 2024 article provides the latest update, Note: Links have been removed,

Evolving Beyond the Xenobot

“People thought this was a one-off froggy-specific result, but this is a very profound thing,” emphasized Levin. To demonstrate its translatability in a non-frog model, he wondered, “What’s the furthest from an embryonic frog? Well, that would be an adult human.”

He enlisted the help of Gizem Gumuskaya, a synthetic biologist with an architectural background in Levin’s group, to tackle this challenge of creating biological robots using human cells to create anthrobots.8 While Gumuskaya was not involved with the development of xenobots, she drew inspiration from their design. By using adult human tracheal cells, she found that adult cells still displayed morphologic plasticity.

There are several key differences between xenobots and anthrobots: species, cell source (embryonic or adult), and the anthrobots’ ability to self-assemble without manipulation. “When considering applications, as a rule of thumb, xenobots are better suited to the environment. They exhibit higher durability, require less maintenance, and can coexist within the environment,” said Gumuskaya.

Meanwhile, there is greater potential for the use of mammalian-derived biobots in biomedical applications. This could include localized drug delivery, deposition into the arteries to break up plaque buildup, or deploying anthrobots into tissue to act as biosensors. “[Anthrobots] are poised as a personalized agent with the same DNA but new functionality,” remarked Gumuskaya.

Here’s a link to and a citation for the team’s latest paper,

Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells by Gizem Gumuskaya, Pranjal Srivastava, Ben G. Cooper, Hannah Lesser, Ben Semegran, Simon Garnier, Michael Levin. Advanced Science Volume 11, Issue 4 January 26, 2024 2303575 DOI: https://doi.org/10.1002/advs.202303575 First published: 30 November 2023

This paper is open access.

Resurrection consent for digital cloning of the dead

It’s a bit disconcerting to think that one might be resurrected, in this case, digitally, but Dr Masaki Iwasaki has helpfully published a study on attitudes to digital cloning and resurrection consent, which could prove helpful when establishing one’s final wishes.

A January 4, 2024 De Gruyter (publisher) press release (repurposed from a January 4, 2024 blog posting on De Gruyter.com) explains the idea and the study,

In a 2014 episode of sci-fi series Black Mirror, a grieving young widow reconnects with her dead husband using an app that trawls his social media history to mimic his online language, humor and personality. It works. She finds solace in the early interactions – but soon wants more.   

Such a scenario is no longer fiction. In 2017, the company Eternime aimed to create an avatar of a dead person using their digital footprint, but this “Skype for the dead” didn’t catch on. The machine-learning and AI algorithms just weren’t ready for it. Neither were we.

Now, in 2024, amid exploding use of ChatGPT-like programs, similar efforts are on the way. But should digital resurrection be allowed at all? And are we prepared for the legal battles over what constitutes consent?

In a study published in the Asian Journal of Law and Economics, Dr Masaki Iwasaki of Harvard Law School, currently an assistant professor at Seoul National University, explores how the deceased’s consent (or otherwise) affects attitudes to digital resurrection.

US adults were presented with scenarios where a woman in her 20s dies in a car accident. A company offers to bring a digital version of her back, but her consent is, at first, ambiguous. What should her friends decide?

Two options – one where the deceased has consented to digital resurrection and another where she hasn’t – were read by participants at random. They then answered questions about the social acceptability of bringing her back on a five-point rating scale, considering other factors such as ethics and privacy concerns.

Results showed that expressed consent shifted acceptability two points higher compared to dissent. “Although I expected societal acceptability for digital resurrection to be higher when consent was expressed, the stark difference in acceptance rates – 58% for consent versus 3% for dissent – was surprising,” says Iwasaki. “This highlights the crucial role of the deceased’s wishes in shaping public opinion on digital resurrection.”

In fact, 59% of respondents disagreed with their own digital resurrection, and around 40% of respondents did not find any kind of digital resurrection socially acceptable, even with expressed consent. “While the will of the deceased is important in determining the societal acceptability of digital resurrection, other factors such as ethical concerns about life and death, along with general apprehension towards new technology are also significant,” says Iwasaki.  

The results reflect a discrepancy between existing law and public sentiment. People’s general feelings – that the dead’s wishes should be respected – are actually not protected in most countries. The digitally recreated John Lennon in the film Forrest Gump, or animated hologram of Amy Winehouse reveal the ‘rights’ of the dead are easily overridden by those in the land of the living.

So, is your digital destiny something to consider when writing your will? It probably should be but in the current absence of clear legal regulations on the subject, the effectiveness of documenting your wishes in such a way is uncertain. For a start, how such directives are respected varies by legal jurisdiction. “But for those with strong preferences documenting their wishes could be meaningful,” says Iwasaki. “At a minimum, it serves as a clear communication of one’s will to family and associates, and may be considered when legal foundations are better established in the future.”

It’s certainly a conversation worth having now. Many generative AI chatbot services, such as Replika (“The AI companion who cares”) and Project December (“Simulate the dead”), already enable conversations with chatbots replicating real people’s personalities. The service ‘You, Only Virtual’ (YOV) allows users to upload someone’s text messages, emails and voice conversations to create a ‘versona’ chatbot. And, in 2020, Microsoft obtained a patent to create chatbots from text, voice and image data for living people as well as for historical figures and fictional characters, with the option of rendering in 2D or 3D.

Iwasaki says he’ll investigate this and the digital resurrection of celebrities in future research. “It’s necessary first to discuss what rights should be protected, to what extent, then create rules accordingly,” he explains. “My research, building upon prior discussions in the field, argues that the opt-in rule requiring the deceased’s consent for digital resurrection might be one way to protect their rights.”

There is a link to the study in the press release above but this includes a citation, of sorts,

Digital Cloning of the Dead: Exploring the Optimal Default Rule by Masaki Iwasaki. Asian Journal of Law and Economics DOI: https://doi.org/10.1515/ajle-2023-0125 Published Online: 2023-12-27

This paper is open access.

Health/science journalists/editors: deadline is March 22, 2024 for media boot camp in Boston, Massachusetts

A February 14, 2024 Broad Institute news release presents an exciting opportunity for health/science journalists and editors,

The Broad Institute of MIT [Massachusetts Institute of Technology] and Harvard is now accepting applications for its 2024 Media Boot Camp.

This annual program connects health/science journalists and editors with faculty from the Broad Institute, Massachusetts Institute of Technology, Harvard University, and Harvard’s teaching hospitals for a two-day event exploring the latest advances in genomics and biomedicine. Journalists will explore possible future storylines, gain fundamental background knowledge, and build relationships with researchers. The program format includes presentations, discussions, and lab tours.

The 2024 Media Boot Camp will take place in person at the Broad Institute in Cambridge, MA on Thursday, May 16 and Friday, May 17 (with an evening welcome reception on Wednesday, May 15).

APPLICATION DEADLINE IS FRIDAY, MARCH 22 (5:00 PM US EASTERN TIME).

2024 Boot Camp topics include:

  • Gene editing
  • New approaches for therapeutic delivery  
  • Cancer biology, drug development
  • Data sciences, machine learning
  • Neurobiology (stem cell models of psychiatric disorders)
  • Antibiotic resistance, microbial biology
  • Medical and population genetics, genomic medicine

Current speakers include: Mimi Bandopadhayay, Clare Bernard, Roby Bhattacharyya, Todd Golub, Laura Kiessling, Eric Lander, David Liu, Ralda Nehme, Heidi Rehm, William Sellers, Feng Zhang, with potentially more to come.

This Media Boot Camp is an educational offering. All presentations are on-background.

Hotel accommodations and meals during the program will be provided by the Broad Institute. Attendees must cover travel costs to and from Boston.

Application Process

By Friday, March 22 [2024] (5:00 PM US Eastern time [2 pm PT]), please send at least one paragraph describing your interest in the program and how you hope it will benefit your reporting, as well as three recent news clips, to David Cameron, Director of External Communications, dcameron@broadinstitute.org

Please contact David at dcameron@broadinstitute.org, or 617-714-7184 with any questions.

I couldn’t find details about eligibility; that said, I wish you good luck with your ‘paragraph and three recent clips’ submission.

Need to improve oversight on chimeric human-animal research

It seems chimeras are of more interest these days. In all likelihood that has something to do with the fellow who received a transplant of a pig’s heart in January 2022 (he died in March 2022).

For those who aren’t familiar with the term, a chimera is an entity with two different DNA (deoxyribonucleic acid) identities. In short, if you get a DNA sample from the heart, it’s different from a DNA sample obtained from a cheek swab. This contrasts with a hybrid such as a mule (donkey/horse) whose DNA samples show a consistent identity throughout its body.

A December 12, 2022 The Hastings Center news release (also on EurekAlert) announces a special report,

A new report on the ethics of crossing species boundaries by inserting human cells into nonhuman animals – research surrounded by debate – makes recommendations clarifying the ethical issues and calling for improved oversight of this work.

The report, “Creating Chimeric Animals — Seeking Clarity On Ethics and Oversight,” was developed by an interdisciplinary team, with funding from the National Institutes of Health. Principal investigators are Josephine Johnston and Karen Maschke, research scholars at The Hastings Center, and Insoo Hyun, director of the Center for Life Sciences and Public Learning at the Museum of Life Sciences in Boston, formerly of Case Western Reserve University.

Advances in human stem cell science and gene editing enable scientists to insert human cells more extensively and precisely into nonhuman animals, creating “chimeric” animals, embryos, and other organisms that contain a mix of human and nonhuman cells.

Many people hope that this research will yield enormous benefits, including better models of human disease, inexpensive sources of human eggs and embryos for research, and sources of tissues and organs suitable for transplantation into humans. 

But there are ethical concerns about this type of research, which raise questions such as whether the moral status of nonhuman animals is altered by the insertion of human stem cells, whether these studies should be subject to additional prohibitions or oversight, and whether this kind of research should be done at all.

The report found that:

Animal welfare is a primary ethical issue and should be a focus of ethical and policy analysis as well as the governance and oversight of chimeric research.

Chimeric studies raise the possibility of unique or novel harms resulting from the insertion and development of human stem cells in nonhuman animals, particularly when those cells develop in the brain or central nervous system.

Oversight and governance of chimeric research are siloed, and public communication is minimal. Public communication should be improved, communication between the different committees involved in oversight at each institution should be enhanced, and a national mechanism created for those involved in oversight of these studies. 

Scientists, journalists, bioethicists, and others writing about chimeric research should use precise and accessible language that clarifies rather than obscures the ethical issues at stake. The terms “chimera,” which in Greek mythology refers to a fire-breathing monster, and “humanization” are examples of ethically laden, or overly broad language to be avoided.

The Research Team

The Hastings Center

• Josephine Johnston
• Karen J. Maschke
• Carolyn P. Neuhaus
• Margaret M. Matthews
• Isabel Bolo

Case Western Reserve University
• Insoo Hyun (now at Museum of Science, Boston)
• Patricia Marshall
• Kaitlynn P. Craig

The Work Group

• Kara Drolet, Oregon Health & Science University
• Henry T. Greely, Stanford University
• Lori R. Hill, MD Anderson Cancer Center
• Amy Hinterberger, King’s College London
• Elisa A. Hurley, Public Responsibility in Medicine and Research
• Robert Kesterson, University of Alabama at Birmingham
• Jonathan Kimmelman, McGill University
• Nancy M. P. King, Wake Forest University School of Medicine
• Geoffrey Lomax, California Institute for Regenerative Medicine
• Melissa J. Lopes, Harvard University Embryonic Stem Cell Research Oversight Committee
• P. Pearl O’Rourke, Harvard Medical School
• Brendan Parent, NYU Grossman School of Medicine
• Steven Peckman, University of California, Los Angeles
• Monika Piotrowska, State University of New York at Albany
• May Schwarz, The Salk Institute for Biological Studies
• Jeff Sebo, New York University
• Chris Stodgell, University of Rochester
• Robert Streiffer, University of Wisconsin-Madison
• Lorenz Studer, Memorial Sloan Kettering Cancer Center
• Amy Wilkerson, The Rockefeller University

Here’s a link to and a citation for the report,

Creating Chimeric Animals: Seeking Clarity on Ethics and Oversight edited by Karen J. Maschke, Margaret M. Matthews, Kaitlynn P. Craig, Carolyn P. Neuhaus, Insoo Hyun, Josephine Johnston, The Hastings Center Report Volume 52, Issue S2 (Special Report), November‐December 2022 First Published: 09 December 2022

This report is open access.

Water-based ionic computing (neural computing networks)

Caption: An ionic circuit comprising hundreds of ionic transistors. Credit: Woo-Bin Jung/Harvard SEAS

I love that image and it pertains to this September 29, 2022 news item on ScienceDaily,

Microprocessors in smartphones, computers, and data centers process information by manipulating electrons through solid semiconductors but our brains have a different system. They rely on the manipulation of ions in liquid to process information.

Inspired by the brain, researchers have long been seeking to develop ‘ionics’ in an aqueous solution. While ions in water move slower than electrons in semiconductors, scientists think the diversity of ionic species with different physical and chemical properties could be harnessed for richer and more diverse information processing.

Ionic computing, however, is still in its early days. To date, labs have only developed individual ionic devices such as ionic diodes and transistors, but no one has put many such devices together into a more complex circuit for computing — until now.

A team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with DNA Script, a biotech startup, have developed an ionic circuit comprising hundreds of ionic transistors and performed a core process of neural net computing.

A September 28, 2022 Harvard John A. Paulson School of Engineering and Applied Sciences news release (also on EurekAlert but published on Sept. 29, 2022), which originated the news item, provides details (Note: A link has been removed),

The researchers began by building a new type of ionic transistor from a technique they recently pioneered. The transistor consists of an aqueous solution of quinone molecules, interfaced with two concentric ring electrodes with a center disk electrode, like a bullseye. The two ring electrodes electrochemically lower and tune the local pH around the center disk by producing and trapping hydrogen ions. A voltage applied to the center disk causes an electrochemical reaction to generate an ionic current from the disk into the water. The reaction rate can be sped up or down — increasing or decreasing the ionic current — by tuning the local pH. In other words, the pH controls, or gates, the disk’s ionic current in the aqueous solution, creating an ionic counterpart of the electronic transistor.

They then engineered the pH-gated ionic transistor in such a way that the disk current is an arithmetic multiplication of the disk voltage and a “weight” parameter representing the local pH gating the transistor. They organized these transistors into a 16 × 16 array to expand the analog arithmetic multiplication of individual transistors into an analog matrix multiplication, with the array of local pH values serving as a weight matrix encountered in neural networks.
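To make the arithmetic concrete, here is a minimal sketch (not the authors’ code; the random weights and voltages are purely illustrative) of what the 16 × 16 transistor array computes: each transistor’s output current is its disk voltage multiplied by a pH-set “weight,” and summing the currents row by row yields a matrix-vector product, the core operation of a neural network layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: local pH values acting as a 16 x 16 weight
# matrix, and disk voltages applied as a 16-element input vector.
weights = rng.uniform(0.0, 1.0, size=(16, 16))
voltages = rng.uniform(0.0, 1.0, size=16)

# Each transistor contributes current ~ weight * voltage; currents
# along a row sum together, giving one entry of the analog
# matrix-vector product the ionic circuit performs in water.
currents = weights @ voltages

print(currents.shape)  # one summed current per row of the array
```

The ionic circuit does this same multiply-and-accumulate in the analog domain, with electrochemistry standing in for the digital multipliers and adders of a microprocessor.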

“Matrix multiplication is the most prevalent calculation in neural networks for artificial intelligence,” said Woo-Bin Jung, a postdoctoral fellow at SEAS and the first author of the paper. “Our ionic circuit performs the matrix multiplication in water in an analog manner that is based fully on electrochemical machinery.”

“Microprocessors manipulate electrons in a digital fashion to perform matrix multiplication,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and the senior author of the paper. “While our ionic circuit cannot be as fast or accurate as the digital microprocessors, the electrochemical matrix multiplication in water is charming in its own right, and has a potential to be energy efficient.”

Now, the team looks to enrich the chemical complexity of the system.

“So far, we have used only 3 to 4 ionic species, such as hydrogen and quinone ions, to enable the gating and ionic transport in the aqueous ionic transistor,” said Jung. “It will be very interesting to employ more diverse ionic species and to see how we can exploit them to make rich the contents of information to be processed.”

The research was co-authored by Han Sae Jung, Jun Wang, Henry Hinton, Maxime Fournier, Adrian Horgan, Xavier Godron, and Robert Nicol. It was supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under grant 2019-19081900002.

Here’s a link to and a citation for the paper,

An Aqueous Analog MAC Machine by Woo-Bin Jung, Han Sae Jung, Jun Wang, Henry Hinton, Maxime Fournier, Adrian Horgan, Xavier Godron, Robert Nicol, Donhee Ham. Advanced Materials DOI: https://doi.org/10.1002/adma.202205096 First published online: 23 August 2022

This paper is behind a paywall.

As for the biotech startup mentioned as a collaborative partner in the research, DNA Script can be found here.

A CRISPR (clustered regularly interspaced short palindromic repeats) anniversary

June 2022 was the 10th anniversary of the publication of a study that paved the way for CRISPR-Cas9 gene editing and Sophie Fessl’s June 28, 2022 article for The Scientist offers a brief history (Note: Links have been removed),

Ten years ago, Emmanuelle Charpentier and Jennifer Doudna published the study that paved the way for a new kind of genome editing: the suite of technologies now known as CRISPR. Writing in [the journal] Science, they adapted an RNA-mediated bacterial immune defense into a targeted DNA-altering system. “Our study . . . highlights the potential to exploit the system for RNA-programmable genome editing,” they conclude in the abstract of their paper—a potential that, in the intervening years, transformed the life sciences. 

From gene drives to screens, and diagnostics to therapeutics, CRISPR nucleic acids and the Cas enzymes with which they’re frequently paired have revolutionized how scientists tinker with DNA and RNA. … altering the code of life with CRISPR has been marred by ethical concerns. Perhaps the most prominent example was when Chinese scientist He Jiankui created the first gene edited babies using CRISPR/Cas9 genome editing. Doudna condemned Jiankui’s work, for which he was jailed, as “risky and medically unnecessary” and a “shocking reminder of the scientific and ethical challenges raised by this powerful technology.” 

There’s also the fact that legal battles over who gets to claim ownership of the system’s many applications have persisted almost as long as the technology has been around. Both Doudna and Charpentier’s teams from the University of California, Berkeley, and the University of Vienna and a team led by the Broad Institute’s Feng Zhang claim to be the first to have adapted CRISPR-Cas9 for gene editing in complex cells (eukaryotes). Patent offices in different countries have reached varying decisions, but in the US, the latest rulings say that the Broad Institute of MIT [Massachusetts Institute of Technology] and Harvard retains intellectual property of using CRISPR-Cas9 in eukaryotes, while Emmanuelle Charpentier, the University of California, and the University of Vienna maintain their original patent over using CRISPR-Cas9 for editing in vitro and in prokaryotes. 

Still, despite the controversies, the technique continues to be explored academically and commercially for everything from gene therapy to crop improvement. Here’s a look at seven different ways scientists have utilized CRISPR.

Fessl goes on to give a brief overview of CRISPR and gene drives, genetic screens, diagnostics, including COVID-19 tests, gene therapy, therapeutics, crop and livestock improvement, and basic research.

For anyone interested in the ethical issues (with an in-depth look at the Dr. He Jiankui story), I suggest reading either or both Eben Kirksey’s 2020 book, “The Mutant Project: Inside the Global Race to Genetically Modify Humans,”

An anthropologist visits the frontiers of genetics, medicine, and technology to ask: Whose values are guiding gene editing experiments? And what does this new era of scientific inquiry mean for the future of the human species?

“That rare kind of scholarship that is also a page-turner.”
—Britt Wray, author of Rise of the Necrofauna

At a conference in Hong Kong in November 2018, Dr. He Jiankui announced that he had created the first genetically modified babies—twin girls named Lulu and Nana—sending shockwaves around the world. A year later, a Chinese court sentenced Dr. He to three years in prison for “illegal medical practice.”

As scientists elsewhere start to catch up with China’s vast genetic research program, gene editing is fueling an innovation economy that threatens to widen racial and economic inequality. Fundamental questions about science, health, and social justice are at stake: Who gets access to gene editing technologies? As countries loosen regulations around the globe, from the U.S. to Indonesia, can we shape research agendas to promote an ethical and fair society?

Eben Kirksey takes us on a groundbreaking journey to meet the key scientists, lobbyists, and entrepreneurs who are bringing cutting-edge genetic engineering tools like CRISPR—created by Nobel Prize-winning biochemists Jennifer Doudna and Emmanuelle Charpentier—to your local clinic. He also ventures beyond the scientific echo chamber, talking to disabled scholars, doctors, hackers, chronically-ill patients, and activists who have alternative visions of a genetically modified future for humanity.

and/or Kevin Davies’s 2020 book, “Editing Humanity: The CRISPR Revolution and the New Era of Genome Editing,”

One of the world’s leading experts on genetics unravels one of the most important breakthroughs in modern science and medicine. 

If our genes are, to a great extent, our destiny, then what would happen if mankind could engineer and alter the very essence of our DNA coding? Millions might be spared the devastating effects of hereditary disease or the challenges of disability, from the pain of sickle-cell anemia to the ravages of Huntington’s disease.

But this power to “play God” also raises major ethical questions and poses threats for potential misuse. For decades, these questions have lived exclusively in the realm of science fiction, but as Kevin Davies powerfully reveals in his new book, this is all about to change.

Engrossing and page-turning, Editing Humanity takes readers inside the fascinating world of a new gene editing technology called CRISPR, a high-powered genetic toolkit that enables scientists to not only engineer but to edit the DNA of any organism down to the individual building blocks of the genetic code.

Davies introduces readers to arguably the most profound scientific breakthrough of our time. He tracks the scientists on the front lines of its research to the patients whose powerful stories bring the narrative movingly to human scale.

Though the birth of the “CRISPR babies” in China made international news, there is much more to the story of CRISPR than headlines seemingly ripped from science fiction. In Editing Humanity, Davies sheds light on the implications that this new technology can have on our everyday lives and in the lives of generations to come.

Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome, The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. In 2017, Kevin was selected for a Guggenheim Fellowship in science writing.

I’ve read both books and while some of the same ground is covered, the perspectives diverge somewhat. Both authors offer a more nuanced discussion of the issues than was the case in the original reporting about Dr. He’s work.

Reconfiguring a LEGO-like AI chip with light

MIT engineers have created a reconfigurable AI chip that comprises alternating layers of sensing and processing elements that can communicate with each other. Credit: Figure courtesy of the researchers and edited by MIT News

This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),

Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste. 

Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.

The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult, if not impossible, to sever and rewire, making such stackable designs not reconfigurable.

The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.

“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”

The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.

“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”

The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.

Lighting the way

The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.

In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection. 

“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”

The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. Photodetectors constitute an image sensor for receiving data, and LEDs transmit data to the next layer. As a signal (for instance an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.

Stacking up

The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixelated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)

The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.

“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.

The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.

“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.

Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”

“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”

This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.
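The readout described in the news release, shining a letter onto the chip and picking whichever trained array produces the largest current, amounts to a simple winner-take-all classification. Here is a minimal sketch of that decision rule in Python; the current values and the `classify` function are hypothetical illustrations, since the actual chip performs this step in analog hardware:

```python
# Winner-take-all readout: each artificial synapse array is trained to
# recognize one letter, and the array producing the largest current "wins".
# The current values below are made up for illustration.

def classify(currents):
    """Return the label whose array produced the largest measured current."""
    return max(currents, key=currents.get)

# Hypothetical measured currents (arbitrary units) for one input image
measured = {"M": 0.12, "I": 0.87, "T": 0.31}
print(classify(measured))  # prints "I"
```

This also illustrates why the team’s blurry-image results make sense: when two arrays (say I and T) respond with nearly equal currents, the winner-take-all rule becomes unreliable, which is why swapping in a better “denoising” processing layer improved the chip’s accuracy.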

Here’s a link to and a citation for the paper,

Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics volume 5, pages 386–393 (2022) Published: 13 June 2022 Issue Date: June 2022 DOI: https://doi.org/10.1038/s41928-022-00778-y

This paper is behind a paywall.