Made famous in 1995 by NASA’s [US National Aeronautics and Space Administration] Hubble Space Telescope, the Pillars of Creation in the heart of the Eagle Nebula have captured imaginations worldwide with their arresting, ethereal beauty.
Now, NASA has released a new 3D visualization of these towering celestial structures using data from NASA’s Hubble and James Webb space telescopes. This is the most comprehensive and detailed multiwavelength movie yet of these star-birthing clouds.
…
A June 26, 2024 NASA news release (also on EurekAlert), which originated the news item, provides detail about the pillars and the visualization (Note: The news release on EurekAlert has its entire text located in the caption for the image),
“By flying past and amongst the pillars, viewers experience their three-dimensional structure and see how they look different in the Hubble visible-light view versus the Webb infrared-light view,” explained principal visualization scientist Frank Summers of the Space Telescope Science Institute (STScI) in Baltimore, who led the movie development team for NASA’s Universe of Learning. “The contrast helps them understand why we have more than one space telescope to observe different aspects of the same object.”
The four Pillars of Creation, made primarily of cool molecular hydrogen and dust, are being eroded by the fierce winds and punishing ultraviolet light of nearby hot, young stars. Finger-like structures larger than the solar system protrude from the tops of the pillars. Within these fingers can be embedded embryonic stars. The tallest pillar stretches across three light-years, three-quarters of the distance between our Sun and the next nearest star.
The movie takes visitors into the three-dimensional structures of the pillars. Rather than an artistic interpretation, the video is based on observational data from a science paper led by Anna McLeod, an associate professor at Durham University in the United Kingdom. McLeod also served as a scientific advisor on the movie project.
“The Pillars of Creation were always on our minds to create in 3D. Webb data in combination with Hubble data allowed us to see the Pillars in more complete detail,” said production lead Greg Bacon of STScI. “Understanding the science and how to best represent it allowed our small, talented team to meet the challenge of visualizing this iconic structure.”
The new visualization helps viewers experience how two of the world’s most powerful space telescopes work together to provide a more complex and holistic portrait of the pillars. Hubble sees objects that glow in visible light, at thousands of degrees. Webb’s infrared vision, which is sensitive to cooler objects with temperatures of just hundreds of degrees, pierces through obscuring dust to see stars embedded in the pillars.
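As a rough aside to the temperatures quoted above, Wien's displacement law shows why the two telescopes split the work: the wavelength at which an object glows brightest scales inversely with its temperature. The little Python sketch below is a ballpark illustration of mine, not a calculation from the NASA release,

```python
# Back-of-envelope: Wien's displacement law, lambda_max = b / T, gives the
# wavelength at which a blackbody of temperature T glows brightest.
WIEN_B = 2.898e-3  # Wien's constant, metre-kelvins

for label, temp_k in [("hot gas and stars (~6,000 K)", 6_000),
                      ("cool dust (~300 K)", 300)]:
    peak_nm = WIEN_B / temp_k * 1e9  # peak wavelength in nanometres
    print(f"{label}: peak emission near {peak_nm:,.0f} nm")

# ~480 nm falls in visible light (Hubble's territory); ~9,700 nm is
# mid-infrared, where Webb sees the cooler, dust-shrouded objects.
```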
“When we combine observations from NASA’s space telescopes across different wavelengths of light, we broaden our understanding of the universe,” said Mark Clampin, Astrophysics Division director at NASA Headquarters in Washington. “The Pillars of Creation region continues to offer us new insights that hone our understanding of how stars form. Now, with this new visualization, everyone can experience this rich, captivating landscape in a new way.”
Produced for NASA by STScI with partners at Caltech/IPAC, and developed by the AstroViz Project of NASA’s Universe of Learning, the 3D visualization is part of a longer, narrated video that combines a direct connection to the science and scientists of NASA’s Astrophysics missions with attention to the needs of an audience of youth, families, and lifelong learners. It enables viewers to explore fundamental questions in science, experience how science is done, and discover the universe for themselves.
Several stages of star formation are highlighted in the visualization. As viewers approach the central pillar, they see at its top an embedded, infant protostar glimmering bright red in infrared light. Near the top of the left pillar is a diagonal jet of material ejected from a newborn star. Though the jet is evidence of star birth, viewers can’t see the star itself. Finally, at the end of one of the left pillar’s protruding “fingers” is a blazing, brand-new star.
The base model of the four pillars used in the visualization has been adapted to the STL file format, so that viewers can download the model file and print it out on 3D printers. Examining the structure of the pillars in this tactile and interactive way adds new perspectives and insights to the overall experience.
More visualizations and connections between the science of nebulas and learners can be explored through other products produced by NASA’s Universe of Learning such as ViewSpace, a video exhibit that is currently running at almost 200 museums and planetariums across the United States. Visitors can go beyond video to explore the images produced by space telescopes with interactive tools now available for museums and planetariums.
NASA’s Universe of Learning materials are based upon work supported by NASA under award number NNX16AC65A to the Space Telescope Science Institute, working in partnership with Caltech/IPAC, Pasadena, California, Center for Astrophysics | Harvard & Smithsonian, Cambridge, Massachusetts, and Jet Propulsion Laboratory, La Cañada Flintridge, California.
Laura Tran’s June 14, 2024 article for The Scientist both gives a brief history of Michael Levin’s and his team’s work on developing living robots using stem cells from an African clawed frog (known as Xenopus laevis) and offers an update on the team’s work on synthetic lifeforms. First, the xenobots (Note 1: This could be difficult for people with issues regarding animal experimentation; Note 2: Links have been removed),
It began with little pieces of embryos scooting around in a dish. In 1998, these unassuming cells caught the attention of Michael Levin, then a postdoctoral researcher studying cell biology at Harvard University. He recalled simply recording a video before tucking the memory away. Nearly two decades later, Levin, now a developmental and synthetic biologist at Tufts University, experienced a sense of déjà vu. He observed that as a student transplanted tissues from one embryo to another, some loose cells swam free in the dish.
Levin had a keen interest in the collective intelligence of cells, tissues, organs, and artificial constructs within regenerative medicine, and he wondered if he could explore the plasticity and harness the untapped capabilities of these swirling embryonic stem cells. “At that point, I started thinking that this is probably an amazing biorobotics platform,” recalled Levin. He rushed to describe this idea to Douglas Blackiston, a developmental and synthetic biologist at Tufts University who worked alongside Levin.
At the time, Blackiston was conducting plasticity research to restore vision in blind African clawed frog tadpoles, Xenopus laevis, a model organism used to understand development. Blackiston transplanted the eyes to unusual places, such as the back of the head or even the tail, to test the integration of transplanted sensory organs.1 The eye axons extended to either the gut or spinal cord. In a display of dynamic plasticity, transplanted eyes on the tail that extended an optic nerve into the spinal cord restored the tadpoles’ vision.2
…
In a similar vein, Josh Bongard, an evolutionary roboticist at the University of Vermont and Levin’s longtime colleague, pondered how robots could evolve like animals. He wanted to apply biological evolution to a machine by tinkering with the brains and bodies of robots and explored this idea with Sam Kriegman, then a graduate student in Bongard’s group and now an assistant professor at Northwestern University. Kriegman used evolutionary algorithms and artificial intelligence (AI) to simulate biological evolution in a virtual creature before teaming up with engineers to construct a physical version.
…
I have two stories about the xenobots. I was a little late to the party, so the June 21, 2021 posting is about xenobots 2.0 and their ability to move and the June 8, 2022 posting is about their ability to reproduce.
“People thought this was a one-off froggy-specific result, but this is a very profound thing,” emphasized Levin. To demonstrate its translatability in a non-frog model, he wondered, “What’s the furthest from an embryonic frog? Well, that would be an adult human.”
He enlisted the help of Gizem Gumuskaya, a synthetic biologist with an architectural background in Levin’s group, to tackle this challenge of creating biological robots using human cells to create anthrobots.8 While Gumuskaya was not involved with the development of xenobots, she drew inspiration from their design. By using adult human tracheal cells, she found that adult cells still displayed morphologic plasticity.
…
There are several key differences between xenobots and anthrobots: species, cell source (embryonic or adult), and the anthrobots’ ability to self-assemble without manipulation. “When considering applications, as a rule of thumb, xenobots are better suited to the environment. They exhibit higher durability, require less maintenance, and can coexist within the environment,” said Gumuskaya.
Meanwhile, there is greater potential for the use of mammalian-derived biobots in biomedical applications. This could include localized drug delivery, deposition into the arteries to break up plaque buildup, or deploying anthrobots into tissue to act as biosensors. “[Anthrobots] are poised as a personalized agent with the same DNA but new functionality,” remarked Gumuskaya.
…
Here’s a link to and a citation for the team’s latest paper,
Motile Living Biobots Self-Construct from Adult Human Somatic Progenitor Seed Cells by Gizem Gumuskaya, Pranjal Srivastava, Ben G. Cooper, Hannah Lesser, Ben Semegran, Simon Garnier, Michael Levin. Advanced Science, Volume 11, Issue 4, January 26, 2024, 2303575. DOI: https://doi.org/10.1002/advs.202303575 First published: 30 November 2023
This morning (March 26, 2024) a notice from the Science Media Centre of Canada (SMCC) arrived (via email) with two bits of news I’m including here.
A freshly launched online science magazine, Sequencer,
From the Science Media Centre of Canada’s March 26, 2024 notice,
Science journalists launch new online science magazine
Sequencer is a writer-owned, subscriber-based platform to explore the world’s weird, exciting, rage-inducing, or even hilarious phenomena.
There was more about this new science magazine in a March 21, 2024 posting by Neel Dhanesh for the Nieman Lab blog, (Note 1: The Nieman Lab appears to be an initiative of the Nieman Foundation for Journalism at Harvard University; Note 2: Links have been removed),
Last year, freelance journalist and National Geographic alum Michael Greshko predicted that a worker-owned science publication would be born in 2024. On Thursday, his prediction came true with the launch of Sequencer.
Sequencer is looking to fill a gap that’s been created by the withering of science desks at newsrooms across the country. The four founders — Max G. Levy, Dan Samorodnitsky, Shi En Kim, and Maddie Bender, all of whom are alums of Massive Science, which stopped publishing in 2021 — write that “traditional science media is broken” in a letter introducing the site:
…
Like its worker-owned brethren (see: Defector, 404 Media, Hell Gate, Aftermath, and more), Sequencer plans to be reader-supported, with subscriptions starting at $7 per month. I, like many others in the world of science journalism, am incredibly excited. …
There’s more about the cat called Masha, formerly known as Velveteen, and the author in a March 21, 2024 blog posting on Sequencer.
As for Sequencer and its founders, there’s the About Us webpage,
Sequencer is a place to decode our world with stories about science. It’s a venue for readers who care about pressing scientific questions and appreciate the weird, exciting, rage-inducing, spine-tingling, mind-bending, or even hilarious phenomena around us. It’s a platform for perennially curious journalists who don’t take themselves too seriously. It’s an invitation to discover alongside us.
We – Dan, Kim, Maddie, and Max [more about the founders later in this posting] – are not just the writers and editors, we’re also the founders and owners. We’re established science journalists and alumni of The Daily Beast, Scientific American, WIRED, Quanta, Smithsonian, C&EN, and more. We’re also all former scientists. Sequencer is our experiment.
Like any good experiment, Sequencer exists atop a heavily researched, rigorously tested, science-backed hypothesis: Traditional science media is broken, so it’s time for something new.
I found this bit particularly interesting,
This is typically how the sausage gets made in our industry: A scientist, usually someone who works at one of a handful of American or European universities, publishes their new work in a prestigious journal. Their well-funded institution’s PR team crafts a press release, puts the work under embargo, and emails it to journalists on their press list.
When it works, this model earns many important labs their 15 minutes of fame; millions of people learn about a breakthrough. But when it’s the governing model of science journalism, it constrains any content to bounds that are sterile and homogenous. There’s little room for analysis and perspective about the work that goes into doing science, let alone criticism or any reckoning with the future. At a time where climate change is laying waste to the planet and a historic pandemic trudges on and on, science journalism is too-often blank-faced and credulous.
That’s all assuming the model works. More and more, we’re realizing that it doesn’t. Bedrock science publications are dying. Or rather, they’re being actively killed by layoffs, predatory venture capital firms, and mega-conglomerates that keep inexplicably pivoting to video. It would be funny if it weren’t so bleak. [emphases mine]
…
My March 8, 2024 posting, “Science journalism … ch-ch-ch-ch-changes” provides more context for the phrases I’ve highlighted in the excerpt above.
Sequencer is subscriber-supported,
Sequencer is subscriber-supported. That means we are cutting out the middleman and going directly to the reader. We depend on your money to power us, and your feedback to shape our coverage. Do you like our stories? Are we missing something? What do normal people want to see in a science publication? Tell us! Email us at hello@sequencermag.com
We’re choosing $7/month because if you live in a major American city, $7 is the price of a latte. We deserve a latte, don’t you think?
Meet the founders,
Maddie Bender
Maddie is a science, health, and technology journalist. She has worked in print, digital, audio, and video media. Some reporting highlights include covering spotted lanternfly fetish content, brain-breaking pseudoscience, and metaverse fitness. She lives in Honolulu, HI, and deeply misses her cat (not dead, just in New York.)
…
Dan Samorodnitsky
Dan is a science journalist based in Minneapolis, MN. He’ll write about anything but specializes in biology, genetics, health, and the history of science. He has also written about Dairy Queens, church fish fries, and local politics. He has a cat, Masha, who will definitely appear in newsletters and posts.
…
Shi En Kim
Kim is a Malaysian-born, DC-based journalist whose writing spans the scientific gamut (please don’t make her pick a favorite beat). Outside of science writing, she dabbles in art, plans for backpacking trips faster than she can go on them, and plots the next move against her imposter syndrome in a never-ending tussle.
…
Max G. Levy
Max is a science journalist whose favorite work tells the human stories behind discoveries in public health, climate change and tech. Max’s work appears in news outlets, magazines, and science videos on YouTube. He’s got sand in his hair as he writes this from his home in Los Angeles. If he goes a couple weeks without mentioning his late pet rats (R.I.P. Fiona, Syd, and Mouse) please call for help.
…
Storytelling grants and the National Geographic
This is the second item of interest from the SMCC March 26, 2024 notice,
National Geographic Storytelling Grants: Submission deadline has been extended. New submission deadline: April 11, 2024 | 23:59 ET. The grants fund individuals working on projects in science, conservation, storytelling, education, and technology that align with one or more of our focus areas …
I found the page where the grants are described to be confusing. First, storytelling grants are part of the National Geographic ‘Grants and Investments’ program known as ‘National Geographic Explorers’ and there are two levels of grant opportunities, with ‘storytellers’ being at Level II.
If you scroll down the National Geographic Grants and Investments webpage about 80% of the way, you’ll find Additional Resources, which includes the Level II Grants Program Storytelling Application Template. Good luck!
Extra
In the next day or so (probably by March 28, 2024), you may be seeing some articles about moon-bound Canadian astronaut Jeremy Hansen. He’s giving a virtual presentation at the University of British Columbia (UBC). From a March 26, 2024 UBC media advisory (received via email),
Hansen is one of four crew members for the Artemis II mission, which will send astronauts around the Moon on the first crewed flight of the SLS rocket and Orion spacecraft, no earlier than September 2025. He has lived in a cave underground and on the ocean floor in space mission simulations, and will be the first Canadian to participate in a lunar mission.
I don’t believe this event is open to the public, which is why I haven’t included details, but you can be on the lookout for articles, particularly in local (Vancouver, Canada) publications, over the next few days.
It’s a bit disconcerting to think that one might be resurrected, in this case, digitally, but Dr Masaki Iwasaki has helpfully published a study on attitudes to digital cloning and resurrection consent, which could prove helpful when establishing one’s final wishes.
In a 2014 episode of sci-fi series Black Mirror, a grieving young widow reconnects with her dead husband using an app that trawls his social media history to mimic his online language, humor and personality. It works. She finds solace in the early interactions – but soon wants more.
Such a scenario is no longer fiction. In 2017, the company Eternime aimed to create an avatar of a dead person using their digital footprint, but this “Skype for the dead” didn’t catch on. The machine-learning and AI algorithms just weren’t ready for it. Neither were we.
Now, in 2024, amid exploding use of ChatGPT-like programs, similar efforts are on the way. But should digital resurrection be allowed at all? And are we prepared for the legal battles over what constitutes consent?
In a study published in the Asian Journal of Law and Economics, Dr Masaki Iwasaki of Harvard Law School, currently an assistant professor at Seoul National University, explores how the deceased’s consent (or otherwise) affects attitudes to digital resurrection.
US adults were presented with scenarios where a woman in her 20s dies in a car accident. A company offers to bring a digital version of her back, but her consent is, at first, ambiguous. What should her friends decide?
Two options – one where the deceased has consented to digital resurrection and another where she hasn’t – were read by participants at random. They then answered questions about the social acceptability of bringing her back on a five-point rating scale, considering other factors such as ethics and privacy concerns.
Results showed that expressed consent shifted acceptability two points higher compared to dissent. “Although I expected societal acceptability for digital resurrection to be higher when consent was expressed, the stark difference in acceptance rates – 58% for consent versus 3% for dissent – was surprising,” says Iwasaki. “This highlights the crucial role of the deceased’s wishes in shaping public opinion on digital resurrection.”
In fact, 59% of respondents disagreed with their own digital resurrection, and around 40% of respondents did not find any kind of digital resurrection socially acceptable, even with expressed consent. “While the will of the deceased is important in determining the societal acceptability of digital resurrection, other factors such as ethical concerns about life and death, along with general apprehension towards new technology are also significant,” says Iwasaki.
The results reflect a discrepancy between existing law and public sentiment. People’s general feelings – that the dead’s wishes should be respected – are actually not protected in most countries. The digitally recreated John Lennon in the film Forrest Gump and the animated hologram of Amy Winehouse reveal that the ‘rights’ of the dead are easily overridden by those in the land of the living.
So, is your digital destiny something to consider when writing your will? It probably should be, but in the current absence of clear legal regulations on the subject, the effectiveness of documenting your wishes in such a way is uncertain. For a start, how such directives are respected varies by legal jurisdiction. “But for those with strong preferences documenting their wishes could be meaningful,” says Iwasaki. “At a minimum, it serves as a clear communication of one’s will to family and associates, and may be considered when legal foundations are better established in the future.”
It’s certainly a conversation worth having now. Many generative AI chatbot services, such as Replika (“The AI companion who cares”) and Project December (“Simulate the dead”), already enable conversations with chatbots replicating real people’s personalities. The service ‘You, Only Virtual’ (YOV) allows users to upload someone’s text messages, emails and voice conversations to create a ‘versona’ chatbot. And, in 2020, Microsoft obtained a patent to create chatbots from text, voice and image data for living people as well as for historical figures and fictional characters, with the option of rendering in 2D or 3D.
Iwasaki says he’ll investigate this and the digital resurrection of celebrities in future research. “It’s necessary first to discuss what rights should be protected, to what extent, then create rules accordingly,” he explains. “My research, building upon prior discussions in the field, argues that the opt-in rule requiring the deceased’s consent for digital resurrection might be one way to protect their rights.”
There is a link to the study in the press release above but this includes a citation, of sorts,
“Geist’s handmade robots made movements as simple as a ping-pong table flapping or coiling up to shoot, but the contact microphones and sound processing documented a percussive, electro, mini symphony.” – Austin Chronicle
“There’s mystery in Geist’s music. It’s heady, ASMR-infused dance music — there’s something special happening here, but it’s not immediately clear what.” – Engadget
“For Geist, the instruments represent not just a new way to make music, but a new way to experience it. The instruments each have a visual component, which makes it possible to watch the sounds as Geist creates them.” – Wired
“The performance is fascinating and bewildering, but the music itself provokes one to want to dance in a dimly lit nightclub.” – MixMag
“It doesn’t get geekier than this.” – New York Times
“These robots play electrifying techno music.”– CNET
German sonic artist Moritz Simon Geist will showcase his latest work “Don’t Look At Me” at the Central Presbyterian Church on March 15th. With his new robotic instrument, Geist is presenting a contemplative ambient performance around the themes of attention economy, spatial sound, and sine waves. “Don’t Look at Me” was developed as an interactive installation in 2023 in South Korea and uses resonator tubes, light, and vibrato elements to create a fascinating, ever-changing soundscape. For the SXSW event, Geist is showcasing his latest compositions with this instrument.
Moritz is returning to SXSW 2024 with a handful of performances and robotic interventions. His works and performances revolve around the questions: How do machines, algorithms and humans interact? How can we find a playful way to interact with non-human music players? And can robots play techno?
Moritz and his team have been developing sound machines and kinetic installations for more than 10 years already, and his works and performances have been shown at festivals and stages around the world. For this SXSW, Moritz is bringing both performances for several techno shows as well as a contemplative ambient show at the Central Presbyterian Church on March 15th [2024]. Here he will present compositions for his latest work “Don’t Look At Me”.
Geist is well known for his performances and self-developed instruments using robots and mechanics as the main sound source. His works have been shown internationally and have received numerous awards in recent years.
Of his return to SXSW, Moritz says, “SXSW 2024 is only the second time I’m playing ‘Don’t Look at Me’ with my new robotic instrument! Playing with a new instrument this complex is always like this first walk outside with a toddler: You never know where you end up: manic laughter at the playground or existential crying in the supermarket.” Regarding his ongoing fascination with machines as instruments, Moritz muses, “When I was younger, I played in a punk rock band, but at some point, I got really annoyed by my fellow musicians, so I swore to myself that I would never play with human musicians again. Jokes aside, I think robotics is a wonderful tool to give a body back to the normally electronically generated sound of techno. The main reason why I’m using robotics as a musical instrument is that the computer is, in my opinion, not the best tool for creating electronic sounds.”
Should you be considering a ticket purchase for the April 15 – 19, 2024 TED event in Vancouver, the cheap ($6250 [USD?]) seats are sold out. Tickets at the next level up are $12,500 and after that, they are $25,000. Should you have more money to burn, you are of course free to become a patron.
A look at the 2024 list of speakers will tell you it is an eclectic list with a significant proportion of speakers focused on the topic of artificial intelligence/robotics.
The three speakers being highlighted here are not focused on artificial intelligence/robotics and have nothing in common with each other (topic wise).
First up, Bill Ackman, a very, very wealthy man, has a messy backstory. Here’s the short description followed by the long one,
Bill Ackman
Founder and CEO, Pershing Square Capital Management
TALK TOPIC
The activist investor playbook (in conversation with Alison Taylor)
Bill Ackman is founder and CEO of the hedge fund Pershing Square Capital Management and a storied activist investor. He is the chairman of Howard Hughes Holdings, a real estate development and management company based in Texas, and a member of the board of Universal Music Group. He is also the co-trustee of The Pershing Square Foundation, a family foundation supporting those tackling important social issues worldwide. At TED2024, Ackman will be interviewed by business professor Alison Taylor.
Mr. Ackman is not entirely self-made, from his Wikipedia entry (Note: Links have been removed),
…
Ackman was raised in Chappaqua, New York, the son of Ronnie I. (née Posner) and Lawrence David Ackman, the former chairman of a New York real estate financing firm, Ackman-Ziff Real Estate Group. [emphases mine] [10][11][12] He is of Ashkenazi Jewish descent.[13][14][15] In 1988, he received a Bachelor of Arts degree magna cum laude in social studies from Harvard College. His thesis was titled Scaling the Ivy Wall: The Jewish and Asian American Experience in Harvard Admissions.[16] In 1992, he received a Master of Business Administration degree from Harvard Business School.[17]
As for the messiness, there’s this from his Wikipedia entry (Note 1: Links have been removed; Note 2: All emphases are mine),
…
In October 2023, following the onset of the 2023 Israel–Hamas war after the October 7 attack, several Harvard undergraduate student groups signed a letter condemning the Israeli state. The statement held the “Israeli regime entirely responsible for all unfolding violence,” declared that millions of Palestinians in Gaza have been “forced to live in an open-air prison,” and called on Harvard to “take action to stop the ongoing annihilation of Palestinians.”
In response, Ackman called for the publication of the names of all students involved in signing the letter so that he could ensure his company and others do not “inadvertently hire” any of the signatories. Ackman posted, “One should not be able to hide behind a corporate shield when issuing statements supporting the actions of terrorists,” and the names “should be made public so their views are publicly known”.[83] Ackman’s stance was supported by other CEOs such as Jonathan Neman, David Duel and Jake Wurzak.[84] Former Harvard president Lawrence Summers, though agreeing with Ackman on the need to look at employees’ political views, called Ackman’s request for a list of names “the stuff of Joe McCarthy”.[85]
In November 2023, Ackman defended Elon Musk after the latter expressed agreement with a user who asserted that “Jewish communities” supported “hordes of minorities flooding their country” and pushed “dialectical hatred against whites”, describing it as “shoot from the hip commentary”.[86][87]
Ackman also engaged in a campaign to remove Claudine Gay from her position as Harvard’s president. He argued that her response to antisemitism was insufficient and amplified allegations by conservative media that she engaged in plagiarism.[88][89]
On January 3, 2024, Business Insider published an article alleging that Ackman’s wife, Neri Oxman*, plagiarized portions of her dissertation. A day after the article’s publication, Oxman apologized for plagiarizing portions of her dissertation.[90][91] Ackman, in response to the article, pledged to conduct a plagiarism review of all MIT [Massachusetts Institute of Technology] faculty, including MIT’s president, Sally Kornbluth, who, alongside Gay, attended a congressional hearing on antisemitism in higher education.[90]
…
In 2018, Ackman became engaged to Neri Oxman.[94] In January 2019, Oxman and Ackman married at the Central Synagogue in Manhattan,[13] and they had their first child together in spring 2019.[95] In August 2019, Ackman wrote to MIT Media Lab director Joi Ito to discourage him from mentioning Oxman when discussing convicted sex offender Jeffrey Epstein, who had donated $125,000 to Oxman’s lab.[96]
…
*There are more complications where Neri Oxman is concerned. She is an Israeli-American who came to the US in 2005, where she commenced PhD studies at MIT. After graduation she became a professor at MIT, a position she left in 2020 to found Oxman Architects. Despite the company name, the business seems more focused on art installations and experimental work. (sourced from Oxman’s Wikipedia entry)
I wonder how Mr. Ackman characterizes the difference between activism and actions that result in the destruction of other people’s careers because you disagree with them.
Ackman’s interviewer, Alison Taylor, is an interesting choice given that she is a business professor at New York University’s Stern School of Business and author of a February 6, 2024 article (Corporate Advocacy in a Time of Social Outrage; Businesses can’t weigh in on every issue that employees care about. But they can create a culture of open dialogue and ethical transparency [emphasis mine]) for the Harvard Business Review. The article is excerpted from Taylor’s book, “Higher Ground: How Business Can Do the Right Thing in a Turbulent World,” published by Harvard Business Review Press on February 13, 2024.
This talk looks like an attempt to rehabilitate Mr. Ackman’s reputation while giving Ms. Taylor publicity for her newly published book in an environment where neither is likely to be strongly challenged.
The two speakers I’m most excited about are Tammy Ma, fusion physicist, and Brian Stokes Mitchell, actor and singer.
As there is a local company known as General Fusion, the topic of fusion energy has been covered here a number of times, including in a December 13, 2022 posting relevant to Ms. Ma’s TED appearance, “US announces fusion energy breakthrough.”
Tammy Ma is the lead for the Inertial Fusion Energy Initiative at Lawrence Livermore National Laboratory, where she creates miniature stars in order to develop ways to harness their power as a clean, limitless energy source for the future. She was a member of the team at the National Ignition Facility that achieved fusion ignition in December 2022 — a reaction that, for the first time in history, released more energy than it consumed. [emphasis mine] A fellow of the American Physical Society, she serves on the Fusion Energy Sciences Advisory Committee, advising the US Department of Energy’s Office of Science on issues related to fusion energy and plasma research.
Brian Stokes Mitchell is a Tony-winning actor, singer and music producer. A veteran of 11 Broadway shows and a member of the Theatre Hall of Fame, he has performed iconic roles including Frasier’s snarky upstairs neighbor Cam, Hillary’s bungee-jumping boyfriend on the Fresh Prince of Bel-Air, The Prince of Egypt (singing “Through Heaven’s Eyes”) and, most recently, Stanley Townsend in the 2024 feature film Shirley with Regina King. He has performed twice at the White House, serves on the board of Americans for the Arts and is one of the founding members of Black Theatre United.
Stokes Mitchell performed at the Library of Congress Gershwin Prize for Popular Song ceremony in 2017 when Tony Bennett was the honoree. The performances were top notch, but something happened when Stokes Mitchell took the stage for his first number. The audience was electrified, as was every performer who came after him, some of them giving their second performance of the evening. There is no guarantee that Mr. Stokes Mitchell can do that at his 2024 TED talk but that is the blessing and the curse of live performances.
The Broad Institute of MIT [Massachusetts Institute of Technology] and Harvard is now accepting applications for its 2024 Media Boot Camp.
This annual program connects health/science journalists and editors with faculty from the Broad Institute, Massachusetts Institute of Technology, Harvard University, and Harvard’s teaching hospitals for a two-day event exploring the latest advances in genomics and biomedicine. Journalists will explore possible future storylines, gain fundamental background knowledge, and build relationships with researchers. The program format includes presentations, discussions, and lab tours.
The 2024 Media Boot Camp will take place in person at the Broad Institute in Cambridge, MA on Thursday, May 16 and Friday, May 17 (with an evening welcome reception on Wednesday, May 15).
APPLICATION DEADLINE IS FRIDAY, MARCH 22 (5:00 PM US EASTERN TIME).
2024 Boot Camp topics include:
Gene editing
New approaches for therapeutic delivery
Cancer biology, drug development
Data sciences, machine learning
Neurobiology (stem cell models of psychiatric disorders)
This Media Boot Camp is an educational offering. All presentations are on-background.
Hotel accommodations and meals during the program will be provided by the Broad Institute. Attendees must cover travel costs to and from Boston.
Application Process
By Friday, March 22 [2024] (5:00 PM US Eastern time [2 pm PT]), please send at least one paragraph describing your interest in the program and how you hope it will benefit your reporting, as well as three recent news clips, to David Cameron, Director of External Communications, dcameron@broadinstitute.org
It seems chimeras are of more interest these days. In all likelihood that has something to do with the fellow who received a transplant of a pig’s heart in January 2022 (he died in March 2022).
For those who aren’t familiar with the term, a chimera is an entity with two different DNA (deoxyribonucleic acid) identities. In short, if you get a DNA sample from the heart, it’s different from a DNA sample obtained from a cheek swab. This contrasts with a hybrid such as a mule (donkey/horse), whose DNA samples show a consistent identity throughout its body.
A new report on the ethics of crossing species boundaries by inserting human cells into nonhuman animals – research surrounded by debate – makes recommendations clarifying the ethical issues and calling for improved oversight of this work.
The report, “Creating Chimeric Animals — Seeking Clarity On Ethics and Oversight,” was developed by an interdisciplinary team, with funding from the National Institutes of Health. Principal investigators are Josephine Johnston and Karen Maschke, research scholars at The Hastings Center, and Insoo Hyun, director of the Center for Life Sciences and Public Learning at the Museum of Life Sciences in Boston, formerly of Case Western Reserve University.
Advances in human stem cell science and gene editing enable scientists to insert human cells more extensively and precisely into nonhuman animals, creating “chimeric” animals, embryos, and other organisms that contain a mix of human and nonhuman cells.
Many people hope that this research will yield enormous benefits, including better models of human disease, inexpensive sources of human eggs and embryos for research, and sources of tissues and organs suitable for transplantation into humans.
But there are ethical concerns about this type of research, which raise questions such as whether the moral status of nonhuman animals is altered by the insertion of human stem cells, whether these studies should be subject to additional prohibitions or oversight, and whether this kind of research should be done at all.
The report found that:
Animal welfare is a primary ethical issue and should be a focus of ethical and policy analysis as well as the governance and oversight of chimeric research.
Chimeric studies raise the possibility of unique or novel harms resulting from the insertion and development of human stem cells in nonhuman animals, particularly when those cells develop in the brain or central nervous system.
Oversight and governance of chimeric research are siloed, and public communication is minimal. Public communication should be improved, communication between the different committees involved in oversight at each institution should be enhanced, and a national mechanism should be created for those involved in oversight of these studies.
Scientists, journalists, bioethicists, and others writing about chimeric research should use precise and accessible language that clarifies rather than obscures the ethical issues at stake. The terms “chimera,” which in Greek mythology refers to a fire-breathing monster, and “humanization” are examples of ethically laden or overly broad language to be avoided.
The Research Team
The Hastings Center
• Josephine Johnston • Karen J. Maschke • Carolyn P. Neuhaus • Margaret M. Matthews • Isabel Bolo
Case Western Reserve University • Insoo Hyun (now at Museum of Science, Boston) • Patricia Marshall • Kaitlynn P. Craig
The Work Group
• Kara Drolet, Oregon Health & Science University • Henry T. Greely, Stanford University • Lori R. Hill, MD Anderson Cancer Center • Amy Hinterberger, King’s College London • Elisa A. Hurley, Public Responsibility in Medicine and Research • Robert Kesterson, University of Alabama at Birmingham • Jonathan Kimmelman, McGill University • Nancy M. P. King, Wake Forest University School of Medicine • Geoffrey Lomax, California Institute for Regenerative Medicine • Melissa J. Lopes, Harvard University Embryonic Stem Cell Research Oversight Committee • P. Pearl O’Rourke, Harvard Medical School • Brendan Parent, NYU Grossman School of Medicine • Steven Peckman, University of California, Los Angeles • Monika Piotrowska, State University of New York at Albany • May Schwarz, The Salk Institute for Biological Studies • Jeff Sebo, New York University • Chris Stodgell, University of Rochester • Robert Streiffer, University of Wisconsin-Madison • Lorenz Studer, Memorial Sloan Kettering Cancer Center • Amy Wilkerson, The Rockefeller University
Here’s a link to and a citation for the report,
Creating Chimeric Animals: Seeking Clarity on Ethics and Oversight edited by Karen J. Maschke, Margaret M. Matthews, Kaitlynn P. Craig, Carolyn P. Neuhaus, Insoo Hyun, Josephine Johnston, The Hastings Center Report Volume 52, Issue S2 (Special Report), November‐December 2022 First Published: 09 December 2022
Microprocessors in smartphones, computers, and data centers process information by manipulating electrons through solid semiconductors, but our brains have a different system. They rely on the manipulation of ions in liquid to process information.
Inspired by the brain, researchers have long been seeking to develop ‘ionics’ in an aqueous solution. While ions in water move slower than electrons in semiconductors, scientists think the diversity of ionic species with different physical and chemical properties could be harnessed for richer and more diverse information processing.
Ionic computing, however, is still in its early days. To date, labs have only developed individual ionic devices such as ionic diodes and transistors, but no one has put many such devices together into a more complex circuit for computing — until now.
A team of researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), in collaboration with DNA Script, a biotech startup, have developed an ionic circuit comprising hundreds of ionic transistors and performed a core process of neural net computing.
The researchers began by building a new type of ionic transistor from a technique they recently pioneered. The transistor consists of an aqueous solution of quinone molecules, interfaced with two concentric ring electrodes with a center disk electrode, like a bullseye. The two ring electrodes electrochemically lower and tune the local pH around the center disk by producing and trapping hydrogen ions. A voltage applied to the center disk causes an electrochemical reaction to generate an ionic current from the disk into the water. The reaction rate can be sped up or slowed down – increasing or decreasing the ionic current – by tuning the local pH. In other words, the pH controls, or gates, the disk’s ionic current in the aqueous solution, creating an ionic counterpart of the electronic transistor.
They then engineered the pH-gated ionic transistor in such a way that the disk current is an arithmetic multiplication of the disk voltage and a “weight” parameter representing the local pH gating the transistor. They organized these transistors into a 16 × 16 array to expand the analog arithmetic multiplication of individual transistors into an analog matrix multiplication, with the array of local pH values serving as a weight matrix encountered in neural networks.
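For readers who like to see the arithmetic, here is a minimal Python sketch of the analog computation the release describes, where each transistor’s current behaves like weight × voltage and a 16 × 16 array sums those products into a matrix-vector multiplication. The weight and voltage numbers below are invented for illustration; they are not measurements from the SEAS device,

```python
import numpy as np

# Each pH-gated transistor is idealized as producing a current proportional
# to (weight x voltage), with the weight standing in for the local pH set by
# the ring electrodes. An N x N array of such transistors then sums row-wise
# into an analog matrix-vector multiplication.
rng = np.random.default_rng(0)

N = 16                                    # array size reported for the circuit
weights = rng.uniform(0.0, 1.0, (N, N))   # hypothetical pH-derived weight matrix
voltages = rng.uniform(0.0, 0.5, N)       # hypothetical disk input voltages

# I_i = sum_j W_ij * V_j: each output current accumulates one row's products,
# which is exactly a matrix-vector product.
currents = weights @ voltages
print(currents.round(3))  # 16 summed output currents, one per row
```

In the physical circuit, that same row-wise summing happens in the water, with the array of local pH values playing the role of the weight matrix.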
“Matrix multiplication is the most prevalent calculation in neural networks for artificial intelligence,” said Woo-Bin Jung, a postdoctoral fellow at SEAS and the first author of the paper. “Our ionic circuit performs the matrix multiplication in water in an analog manner that is based fully on electrochemical machinery.”
“Microprocessors manipulate electrons in a digital fashion to perform matrix multiplication,” said Donhee Ham, the Gordon McKay Professor of Electrical Engineering and Applied Physics at SEAS and the senior author of the paper. “While our ionic circuit cannot be as fast or accurate as the digital microprocessors, the electrochemical matrix multiplication in water is charming in its own right, and has a potential to be energy efficient.”
Now, the team looks to enrich the chemical complexity of the system.
“So far, we have used only 3 to 4 ionic species, such as hydrogen and quinone ions, to enable the gating and ionic transport in the aqueous ionic transistor,” said Jung. “It will be very interesting to employ more diverse ionic species and to see how we can exploit them to make rich the contents of information to be processed.”
The research was co-authored by Han Sae Jung, Jun Wang, Henry Hinton, Maxime Fournier, Adrian Horgan, Xavier Godron, and Robert Nicol. It was supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), under grant 2019-19081900002.
Here’s a link to and a citation for the paper,
An Aqueous Analog MAC Machine by Woo-Bin Jung, Han Sae Jung, Jun Wang, Henry Hinton, Maxime Fournier, Adrian Horgan, Xavier Godron, Robert Nicol, Donhee Ham. Advanced Materials DOI: https://doi.org/10.1002/adma.202205096 First published online: 23 August 2022
June 2022 marked the 10th anniversary of the publication of a study that paved the way for CRISPR-Cas9 gene editing, and Sophie Fessl’s June 28, 2022 article for The Scientist offers a brief history (Note: Links have been removed),
Ten years ago, Emmanuelle Charpentier and Jennifer Doudna published the study that paved the way for a new kind of genome editing: the suite of technologies now known as CRISPR. Writing in [the journal] Science, they adapted an RNA-mediated bacterial immune defense into a targeted DNA-altering system. “Our study . . . highlights the potential to exploit the system for RNA-programmable genome editing,” they conclude in the abstract of their paper—a potential that, in the intervening years, transformed the life sciences.
From gene drives to screens, and diagnostics to therapeutics, CRISPR nucleic acids and the Cas enzymes with which they’re frequently paired have revolutionized how scientists tinker with DNA and RNA. … altering the code of life with CRISPR has been marred by ethical concerns. Perhaps the most prominent example was when Chinese scientist He Jiankui created the first gene edited babies using CRISPR/Cas9 genome editing. Doudna condemned Jiankui’s work, for which he was jailed, as “risky and medically unnecessary” and a “shocking reminder of the scientific and ethical challenges raised by this powerful technology.”
There’s also the fact that legal battles over who gets to claim ownership of the system’s many applications have persisted almost as long as the technology has been around. Both Doudna and Charpentier’s teams from the University of California, Berkeley, and the University of Vienna and a team led by the Broad Institute’s Feng Zhang claim to be the first to have adapted CRISPR-Cas9 for gene editing in complex cells (eukaryotes). Patent offices in different countries have reached varying decisions, but in the US, the latest rulings say that the Broad Institute of MIT [Massachusetts Institute of Technology] and Harvard retains intellectual property of using CRISPR-Cas9 in eukaryotes, while Emmanuelle Charpentier, the University of California, and the University of Vienna maintain their original patent over using CRISPR-Cas9 for editing in vitro and in prokaryotes.
Still, despite the controversies, the technique continues to be explored academically and commercially for everything from gene therapy to crop improvement. Here’s a look at seven different ways scientists have utilized CRISPR.
…
Fessl goes on to give a brief overview of CRISPR and gene drives, genetic screens, diagnostics, including COVID-19 tests, gene therapy, therapeutics, crop and livestock improvement, and basic research.
An anthropologist visits the frontiers of genetics, medicine, and technology to ask: Whose values are guiding gene editing experiments? And what does this new era of scientific inquiry mean for the future of the human species?
“That rare kind of scholarship that is also a page-turner.” —Britt Wray, author of Rise of the Necrofauna
At a conference in Hong Kong in November 2018, Dr. He Jiankui announced that he had created the first genetically modified babies—twin girls named Lulu and Nana—sending shockwaves around the world. A year later, a Chinese court sentenced Dr. He to three years in prison for “illegal medical practice.”
As scientists elsewhere start to catch up with China’s vast genetic research program, gene editing is fueling an innovation economy that threatens to widen racial and economic inequality. Fundamental questions about science, health, and social justice are at stake: Who gets access to gene editing technologies? As countries loosen regulations around the globe, from the U.S. to Indonesia, can we shape research agendas to promote an ethical and fair society?
Eben Kirksey takes us on a groundbreaking journey to meet the key scientists, lobbyists, and entrepreneurs who are bringing cutting-edge genetic engineering tools like CRISPR—created by Nobel Prize-winning biochemists Jennifer Doudna and Emmanuelle Charpentier—to your local clinic. He also ventures beyond the scientific echo chamber, talking to disabled scholars, doctors, hackers, chronically-ill patients, and activists who have alternative visions of a genetically modified future for humanity.
One of the world’s leading experts on genetics unravels one of the most important breakthroughs in modern science and medicine.
If our genes are, to a great extent, our destiny, then what would happen if mankind could engineer and alter the very essence of our DNA coding? Millions might be spared the devastating effects of hereditary disease or the challenges of disability, from the pain of sickle-cell anemia to the ravages of Huntington’s disease.
But this power to “play God” also raises major ethical questions and poses threats for potential misuse. For decades, these questions have lived exclusively in the realm of science fiction, but as Kevin Davies powerfully reveals in his new book, this is all about to change.
Engrossing and page-turning, Editing Humanity takes readers inside the fascinating world of a new gene editing technology called CRISPR, a high-powered genetic toolkit that enables scientists to not only engineer but to edit the DNA of any organism down to the individual building blocks of the genetic code.
Davies introduces readers to arguably the most profound scientific breakthrough of our time. He tracks the scientists on the front lines of its research to the patients whose powerful stories bring the narrative movingly to human scale.
Though the birth of the “CRISPR babies” in China made international news, there is much more to the story of CRISPR than headlines seemingly ripped from science fiction. In Editing Humanity, Davies sheds light on the implications that this new technology can have on our everyday lives and in the lives of generations to come.
…
Kevin Davies is the executive editor of The CRISPR Journal and the founding editor of Nature Genetics. He holds an MA in biochemistry from the University of Oxford and a PhD in molecular genetics from the University of London. He is the author of Cracking the Genome and The $1,000 Genome, and co-authored a new edition of DNA: The Story of the Genetic Revolution with Nobel Laureate James D. Watson and Andrew Berry. In 2017, Kevin was selected for a Guggenheim Fellowship in science writing.
I’ve read both books and while some of the same ground is covered, the perspectives diverge somewhat. Both authors offer a more nuanced discussion of the issues than was the case in the original reporting about Dr. He’s work.
This image certainly challenges any ideas I have about what Lego looks like. It seems they see things differently at the Massachusetts Institute of Technology (MIT). From a June 13, 2022 MIT news release (also on EurekAlert),
Imagine a more sustainable future, where cellphones, smartwatches, and other wearable devices don’t have to be shelved or discarded for a newer model. Instead, they could be upgraded with the latest sensors and processors that would snap onto a device’s internal chip — like LEGO bricks incorporated into an existing build. Such reconfigurable chipware could keep devices up to date while reducing our electronic waste.
Now MIT engineers have taken a step toward that modular vision with a LEGO-like design for a stackable, reconfigurable artificial intelligence chip.
The design comprises alternating layers of sensing and processing elements, along with light-emitting diodes (LEDs) that allow the chip’s layers to communicate optically. Other modular chip designs employ conventional wiring to relay signals between layers. Such intricate connections are difficult if not impossible to sever and rewire, leaving those stackable designs unable to be reconfigured.
The MIT design uses light, rather than physical wires, to transmit information through the chip. The chip can therefore be reconfigured, with layers that can be swapped out or stacked on, for instance to add new sensors or updated processors.
“You can add as many computing layers and sensors as you want, such as for light, pressure, and even smell,” says MIT postdoc Jihoon Kang. “We call this a LEGO-like reconfigurable AI chip because it has unlimited expandability depending on the combination of layers.”
The researchers are eager to apply the design to edge computing devices — self-sufficient sensors and other electronics that work independently from any central or distributed resources such as supercomputers or cloud-based computing.
“As we enter the era of the internet of things based on sensor networks, demand for multifunctioning edge-computing devices will expand dramatically,” says Jeehwan Kim, associate professor of mechanical engineering at MIT. “Our proposed hardware architecture will provide high versatility of edge computing in the future.”
The team’s results are published today in Nature Electronics. In addition to Kim and Kang, MIT authors include co-first authors Chanyeol Choi, Hyunseok Kim, and Min-Kyu Song, and contributing authors Hanwool Yeon, Celesta Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Subeen Pang, Sang-Hoon Bae, Hun S. Kum, and Peng Lin, along with collaborators from Harvard University, Tsinghua University, Zhejiang University, and elsewhere.
Lighting the way
The team’s design is currently configured to carry out basic image-recognition tasks. It does so via a layering of image sensors, LEDs, and processors made from artificial synapses — arrays of memory resistors, or “memristors,” that the team previously developed, which together function as a physical neural network, or “brain-on-a-chip.” Each array can be trained to process and classify signals directly on a chip, without the need for external software or an Internet connection.
In their new chip design, the researchers paired image sensors with artificial synapse arrays, each of which they trained to recognize certain letters — in this case, M, I, and T. While a conventional approach would be to relay a sensor’s signals to a processor via physical wires, the team instead fabricated an optical system between each sensor and artificial synapse array to enable communication between the layers, without requiring a physical connection.
“Other chips are physically wired through metal, which makes them hard to rewire and redesign, so you’d need to make a new chip if you wanted to add any new function,” says MIT postdoc Hyunseok Kim. “We replaced that physical wire connection with an optical communication system, which gives us the freedom to stack and add chips the way we want.”
The team’s optical communication system consists of paired photodetectors and LEDs, each patterned with tiny pixels. The photodetectors constitute an image sensor for receiving data, and the LEDs transmit data to the next layer. As a signal (for instance an image of a letter) reaches the image sensor, the image’s light pattern encodes a certain configuration of LED pixels, which in turn stimulates another layer of photodetectors, along with an artificial synapse array, which classifies the signal based on the pattern and strength of the incoming LED light.
Stacking up
The team fabricated a single chip, with a computing core measuring about 4 square millimeters, or about the size of a piece of confetti. The chip is stacked with three image recognition “blocks,” each comprising an image sensor, optical communication layer, and artificial synapse array for classifying one of three letters, M, I, or T. They then shone a pixellated image of random letters onto the chip and measured the electrical current that each neural network array produced in response. (The larger the current, the larger the chance that the image is indeed the letter that the particular array is trained to recognize.)
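Here is a toy Python sketch of that readout scheme as I understand it: each recognition “block” is modeled as a trained array whose output current grows with the overlap between the incoming light pattern and its stored letter, and the chip’s answer is simply the block producing the largest current. The templates, sizes, and noise levels are invented for illustration and are not the MIT team’s actual training data,

```python
import numpy as np

LETTERS = ["M", "I", "T"]

def block_current(image: np.ndarray, template: np.ndarray) -> float:
    """Stand-in for one recognition block: pattern overlap as a scalar 'current'."""
    return float((image * template).sum())

def classify(image: np.ndarray, templates: list[np.ndarray]) -> str:
    # Measure the "current" from each block; the biggest current wins,
    # mirroring how the team read out the chip's answer.
    currents = [block_current(image, t) for t in templates]
    return LETTERS[int(np.argmax(currents))]

# Hypothetical 5x5 pixel templates for M, I, T, plus a noisy "T" input.
rng = np.random.default_rng(1)
templates = [rng.integers(0, 2, (5, 5)).astype(float) for _ in LETTERS]
noisy_t = templates[2] + rng.normal(0.0, 0.2, (5, 5))

print(classify(noisy_t, templates))  # expected: "T" (usually, given the noise)
```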
The team found that the chip correctly classified clear images of each letter, but it was less able to distinguish between blurry images, for instance between I and T. However, the researchers were able to quickly swap out the chip’s processing layer for a better “denoising” processor, and found the chip then accurately identified the images.
“We showed stackability, replaceability, and the ability to insert a new function into the chip,” notes MIT postdoc Min-Kyu Song.
The researchers plan to add more sensing and processing capabilities to the chip, and they envision the applications to be boundless.
“We can add layers to a cellphone’s camera so it could recognize more complex images, or make these into healthcare monitors that can be embedded in wearable electronic skin,” offers Choi, who along with Kim previously developed a “smart” skin for monitoring vital signs.
Another idea, he adds, is for modular chips, built into electronics, that consumers can choose to build up with the latest sensor and processor “bricks.”
“We can make a general chip platform, and each layer could be sold separately like a video game,” Jeehwan Kim says. “We could make different types of neural networks, like for image or voice recognition, and let the customer choose what they want, and add to an existing chip like a LEGO.”
This research was supported, in part, by the Ministry of Trade, Industry, and Energy (MOTIE) from South Korea; the Korea Institute of Science and Technology (KIST); and the Samsung Global Research Outreach Program.
Here’s a link to and a citation for the paper,
Reconfigurable heterogeneous integration using stackable chips with embedded artificial intelligence by Chanyeol Choi, Hyunseok Kim, Ji-Hoon Kang, Min-Kyu Song, Hanwool Yeon, Celesta S. Chang, Jun Min Suh, Jiho Shin, Kuangye Lu, Bo-In Park, Yeongin Kim, Han Eol Lee, Doyoon Lee, Jaeyong Lee, Ikbeom Jang, Subeen Pang, Kanghyun Ryu, Sang-Hoon Bae, Yifan Nie, Hyun S. Kum, Min-Chul Park, Suyoun Lee, Hyung-Jun Kim, Huaqiang Wu, Peng Lin & Jeehwan Kim. Nature Electronics, volume 5, pages 386–393 (2022). Published: 13 June 2022; Issue date: June 2022. DOI: https://doi.org/10.1038/s41928-022-00778-y