Category Archives: robots

Chatbot with expertise in nanomaterials

This December 1, 2023 news item on phys.org starts with a story,

A researcher has just finished writing a scientific paper. She knows her work could benefit from another perspective. Did she overlook something? Or perhaps there’s an application of her research she hadn’t thought of. A second set of eyes would be great, but even the friendliest of collaborators might not be able to spare the time to read all the required background publications to catch up.

Kevin Yager—leader of the electronic nanomaterials group at the Center for Functional Nanomaterials (CFN), a U.S. Department of Energy (DOE) Office of Science User Facility at DOE’s Brookhaven National Laboratory—has imagined how recent advances in artificial intelligence (AI) and machine learning (ML) could aid scientific brainstorming and ideation. To accomplish this, he has developed a chatbot with knowledge in the kinds of science he’s been engaged in.

A December 1, 2023 DOE/Brookhaven National Laboratory news release by Denise Yazak (also on EurekAlert), which originated the news item, describes a research project with a chatbot that has nanomaterial-specific knowledge, Note: Links have been removed,

Rapid advances in AI and ML have given way to programs that can generate creative text and useful software code. These general-purpose chatbots have recently captured the public imagination. Existing chatbots—based on large, diverse language models—lack detailed knowledge of scientific sub-domains. By leveraging a document-retrieval method, Yager’s bot is knowledgeable in areas of nanomaterial science that other bots are not. The details of this project and how other scientists can leverage this AI colleague for their own work have recently been published in Digital Discovery.

Rise of the Robots

“CFN has been looking into new ways to leverage AI/ML to accelerate nanomaterial discovery for a long time. Currently, it’s helping us quickly identify, catalog, and choose samples, automate experiments, control equipment, and discover new materials. Esther Tsai, a scientist in the electronic nanomaterials group at CFN, is developing an AI companion to help speed up materials research experiments at the National Synchrotron Light Source II (NSLS-II).” NSLS-II is another DOE Office of Science User Facility at Brookhaven Lab.

At CFN, there has been a lot of work on AI/ML that can help drive experiments through the use of automation, controls, robotics, and analysis, but having a program that was adept with scientific text was something that researchers hadn’t explored as deeply. Being able to quickly document, understand, and convey information about an experiment can help in a number of ways—from breaking down language barriers to saving time by summarizing larger pieces of work.

Watching Your Language

To build a specialized chatbot, the program required domain-specific text—language taken from areas the bot is intended to focus on. In this case, the text is scientific publications. Domain-specific text helps the AI model understand new terminology and definitions and introduces it to frontier scientific concepts. Most importantly, this curated set of documents enables the AI model to ground its reasoning using trusted facts.

To emulate natural human language, AI models are trained on existing text, enabling them to learn the structure of language, memorize various facts, and develop a primitive sort of reasoning. Rather than laboriously retrain the AI model on nanoscience text, Yager gave it the ability to look up relevant information in a curated set of publications. Providing it with a library of relevant data was only half of the battle. To use this text accurately and effectively, the bot would need a way to decipher the correct context.

“A challenge that’s common with language models is that sometimes they ‘hallucinate’ plausible sounding but untrue things,” explained Yager. “This has been a core issue to resolve for a chatbot used in research as opposed to one doing something like writing poetry. We don’t want it to fabricate facts or citations. This needed to be addressed. The solution for this was something we call ‘embedding,’ a way of categorizing and linking information quickly behind the scenes.”

Embedding is a process that transforms words and phrases into numerical values. The resulting “embedding vector” quantifies the meaning of the text. When a user asks the chatbot a question, the question is also sent to the ML embedding model to calculate its vector value. This vector is used to search through a pre-computed database of text chunks from scientific papers that were similarly embedded. The bot then uses text snippets it finds that are semantically related to the question to get a more complete understanding of the context.

The user’s query and the text snippets are combined into a “prompt” that is sent to a large language model (an expansive program that creates text modeled on natural human language), which generates the final response. The embedding ensures that the text being pulled is relevant in the context of the user’s question. By providing text chunks from the body of trusted documents, the chatbot generates answers that are factual and sourced.
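For readers who want to see the mechanics, here’s a rough sketch in Python of the retrieve-then-prompt loop described above. To be clear, this is my own illustration, not Yager’s actual code (his is in the GitHub repository linked later in this post): the `embed()` function below is a toy stand-in for a real embedding model, and the sample chunks and prompt wording are invented.

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for a real embedding model: hash each word into a slot of a
    # fixed-length vector. A real system would call a trained neural model here.
    v = np.zeros(dim)
    for word in text.lower().split():
        v[hash(word) % dim] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Similarity between two embedding vectors, ignoring their lengths.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Pre-computed offline: every paper is split into chunks and each chunk embedded.
chunks = [
    "Block copolymers self-assemble into periodic nanostructures.",
    "Grazing-incidence X-ray scattering probes thin-film ordering.",
    "Poetry chatbots hallucinate freely.",
]
chunk_vectors = [embed(c) for c in chunks]

def answer(question: str, k: int = 2) -> str:
    # 1. Embed the question and retrieve the k most similar text chunks.
    q = embed(question)
    ranked = sorted(range(len(chunks)),
                    key=lambda i: cosine(q, chunk_vectors[i]), reverse=True)
    snippets = [chunks[i] for i in ranked[:k]]
    # 2. Combine the snippets and the question into a single prompt.
    prompt = ("Answer using only these excerpts, citing them:\n\n"
              + "\n".join(f"- {s}" for s in snippets)
              + f"\n\nQuestion: {question}\nAnswer:")
    return prompt  # a real system would now send this prompt to the LLM

print(answer("How do block copolymers self-assemble?"))
```

Yager’s paper and repository describe the production version of this pipeline; the sketch above only shows the shape of the data flow.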

“The program needs to be like a reference librarian,” said Yager. “It needs to heavily rely on the documents to provide sourced answers. It needs to be able to accurately interpret what people are asking and be able to effectively piece together the context of those questions to retrieve the most relevant information. While the responses may not be perfect yet, it’s already able to answer challenging questions and trigger some interesting thoughts while planning new projects and research.”

Bots Empowering Humans

CFN is developing AI/ML systems as tools that can liberate human researchers to work on more challenging and interesting problems and to get more out of their limited time while computers automate repetitive tasks in the background. There are still many unknowns about this new way of working, but these questions are the start of important discussions scientists are having right now to ensure AI/ML use is safe and ethical.

“There are a number of tasks that a domain-specific chatbot like this could clear from a scientist’s workload. Classifying and organizing documents, summarizing publications, pointing out relevant info, and getting up to speed in a new topical area are just a few potential applications,” remarked Yager. “I’m excited to see where all of this will go, though. We never could have imagined where we are now three years ago, and I’m looking forward to where we’ll be three years from now.”

For researchers interested in trying this software out for themselves, the source code for CFN’s chatbot and associated tools can be found in this GitHub repository.

Brookhaven National Laboratory is supported by the Office of Science of the U.S. Department of Energy. The Office of Science is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit science.energy.gov.

Here’s a link to and a citation for the paper,

Domain-specific chatbots for science using embeddings by Kevin G. Yager.
Digital Discovery, 2023, 2, 1850-1861 DOI: https://doi.org/10.1039/D3DD00112A
First published 10 Oct 2023

This paper appears to be open access.

Living technology possibilities

Before launching into the possibilities, here are two descriptions of ‘living technology’ from the European Centre for Living Technology’s (ECLT) homepage,

Goals

Promote, carry out and coordinate research activities and the diffusion of scientific results in the field of living technology. The scientific areas for living technology are the nano-bio-technologies, self-organizing and evolving information and production technologies, and adaptive complex systems.

History

Founded in 2004 the European Centre for Living Technology is an international and interdisciplinary research centre established as an inter-university consortium, currently involving 18 European and extra-European institutional affiliates.

The Centre is devoted to the study of technologies that exhibit life-like properties including self-organization, adaptability and the capacity to evolve.

Despite the reference to “nano-bio-technologies,” this October 11, 2023 news item on ScienceDaily focuses on microscale living technology,

In a recent article in the high-profile journal “Advanced Materials,” researchers in Chemnitz show just how close and necessary the transition to sustainable living technology is, based on the morphogenesis of self-assembling microelectronic modules, strengthening the recent membership of Chemnitz University of Technology in the European Centre for Living Technology (ECLT) in Venice.

An October 11, 2023 Chemnitz University of Technology (Technische Universität Chemnitz; TU Chemnitz) press release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

It is now apparent that the mass-produced artefacts of technology in our increasingly densely populated world – whether electronic devices, cars, batteries, phones, household appliances, or industrial robots – are increasingly at odds with the sustainable bounded ecosystems achieved by living organisms based on cells over millions of years. Cells provide organisms with soft and sustainable environmental interactions with complete recycling of material components, except in a few notable cases like the creation of oxygen in the atmosphere, and of the fossil fuel reserves of oil and coal (as a result of missing biocatalysts). However, the fantastic information content of biological cells (gigabits of information in DNA alone) and the complexities of protein biochemistry for metabolism seem to place a cellular approach well beyond the current capabilities of technology, and prevent the development of intrinsically sustainable technology.

SMARTLETs: tiny shape-changing modules that collectively self-organize to larger more complex systems

A recent perspective review published in the very high impact journal Advanced Materials this month [October 2023] by researchers at the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) of Chemnitz University of Technology, shows how a novel form of high-information-content Living Technology is now within reach, based on microrobotic electronic modules called SMARTLETs, which will soon be capable of self-assembling into complex artificial organisms. The research belongs to the new field of Microelectronic Morphogenesis, the creation of form under microelectronic control, and builds on work over the previous years at Chemnitz University of Technology to construct self-folding and self-locomoting thin film electronic modules, now carrying tiny silicon chiplets between the folds, for a massive increase in information processing capabilities. Sufficient information can now be stored in each module to encode not only complex functions but fabrication recipes (electronic genomes) for clean rooms to allow the modules to be copied and evolved like cells, but safely because of the gating of reproduction through human operated clean room facilities.

Electrical self-awareness during self-assembly

In addition, the chiplets can provide neuromorphic learning capabilities allowing them to improve performance during operation. A further key feature of the specific self-assembly of these modules, based on matching physical bar codes, is that electrical and fluidic connections can be achieved between modules. These can then be employed to make the electronic chiplets on board “aware” of the state of assembly and of potential errors, allowing them to direct repair, correct mis-assembly, induce disassembly and form collective functions spanning many modules. Such functions include extended communication (antennae), power harvesting and redistribution, remote sensing, material redistribution etc.

So why is this technology vital for sustainability?

The complete digital fab description for modules, for which actually only a limited number of types are required even for complex organisms, allows their material content, responsible originator and environmentally relevant exposure all to be read out. Prof. Dagmar Nuissl-Gesmann from the Law Department at Chemnitz University of Technology observes that “this fine-grained documentation of responsibility intrinsic down to microscopic scales will be a game changer in allowing legal assignment of environmental and social responsibility for our technical artefacts”.

Furthermore, the self-locomotion and self-assembly-disassembly capabilities allows the modules to self-sort for recycling. Modules can be regained, reused, reconfigured, and redeployed in different artificial organisms. If they are damaged, then their limited and documented types facilitate efficient custom recycling of materials with established and optimized protocols for these sorted and now identical entities. These capabilities complement the other more obvious advantages in terms of design development and reuse in this novel reconfigurable media. As Prof. Marlen Arnold, an expert in Sustainability of the Faculty of Economics and Business Administration observes, “Even at high volumes of deployment use, these properties could provide this technology with a hitherto unprecedented level of sustainability which would set the bar for future technologies to share our planet safely with us.”

Contribution to European Living Technology

“This research is a first contribution of MAIN/Chemnitz University of Technology, as a new member of the European Centre for Living Technology (ECLT), based in Venice,” says Prof. Oliver G. Schmidt, Scientific Director of the Research Center MAIN, who adds that “It’s fantastic to see that our deep collaboration with ECLT is paying off so quickly with immediate transdisciplinary benefit for several scientific communities.” “Theoretical research at the ECLT has been urgently in need of novel technology systems able to implement the core properties of living systems,” comments Prof. John McCaskill, coauthor of the paper and a founding director of the ECLT in 2004.

Here’s a link to and a citation for the researchers’ perspective paper,

Microelectronic Morphogenesis: Smart Materials with Electronics Assembling into Artificial Organisms by John S. McCaskill, Daniil Karnaushenko, Minshen Zhu, Oliver G. Schmidt. Advanced Materials DOI: https://doi.org/10.1002/adma.202306344 First published: 09 October 2023

This paper is open access.

XoMotion, an exoskeleton developed in Canada, causes commotion

I first stumbled across these researchers in 2016 when their project was known as the “Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE).” In my January 20, 2016 posting, “#BCTECH: being at the Summit (Jan. 18-19, 2016),” I described visiting a number of booths and talks at the #BC TECH Summit, an event put on by the province of British Columbia (BC, Canada) and the BC Innovation Council (BCIC), and had this to say about WLLAE,

“The Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE) – a lightweight, battery-operated and ergonomic robotic system to help those with mobility issues improve their lives. The exoskeleton features joints and links that correspond to those of a human body and sync with motion. SFU has designed, manufactured and tested a proof-of-concept prototype and the current version can mimic all the motions of hip joints.” The researchers (Siamak Arzanpour and Edward Park) pointed out that the ability to mimic all the motions of the hip is a big difference between their system and others which only allow the leg to move forward or back. They rushed the last couple of months to get this system ready for the Summit. In fact, they received their patent for the system the night before (Jan. 17, 2016) the Summit opened.

Unfortunately, there aren’t any pictures of WLLAE yet and the proof-of-concept version may differ significantly from the final version. This system could be used to help people regain movement (paralysis/frail seniors) and I believe there’s a possibility it could be used to enhance human performance (soldiers/athletes). The researchers still have some significant hoops to jump through before getting to the human clinical trial stage. They need to refine their apparatus, ensure that it can be safely operated, and further develop the interface between human and machine. I believe WLLAE is considered a neuroprosthetic device. While it’s not a fake leg or arm, it enables movement (prosthetic) and it operates on brain waves (neuro). It’s a very exciting area of research; consequently, there’s a lot of international competition. [ETA January 3, 2024: I’m pretty sure I got the neuroprosthetic part wrong]

Time moved on and there was a name change and then there was this November 10, 2023 article by Jeremy Hainsworth for the Vancouver is Awesome website,

Vancouver-based fashion designer Chloe Angus thought she’d be in a wheelchair for the rest of her life after being diagnosed with an inoperable benign tumour in her spinal cord in 2015, resulting in permanent loss of mobility in her legs.

Now, however, she’s been using a state-of-the-art robotic exoskeleton known as XoMotion that can help physically disabled people self-balance, walk, sidestep, climb stairs and crouch.

“The first time I walked with the exoskeleton was a jaw-dropping experience,” said Angus. “After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

She added the exoskeleton has the potential to completely change the world for people with motion disabilities.

XoMotion is the result of a decade of research and the product of a Simon Fraser University spinoff company, Human in Motion Robotics (HMR) Inc. It’s the brainchild of professors Siamak Arzanpour and Edward Park.

Arzanpour and Park, both researchers in the Burnaby-based university’s School of Mechatronic Systems Engineering, began work on the device in 2014. They had a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement.

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” Arzanpour said.

A November 15, 2023 article (with an embedded video) by Amy Judd & Alissa Thibault for Global News (television) highlights Angus’s story,

SFU professors Siamak Arzanpour and Edward Park wanted to help people with motion disabilities to walk freely, naturally and independently.

The exoskeleton [XoMotion] is now the most advanced of its kind in the world.

Chloe Angus, who lost her mobility in her legs in 2015, now works for the team.

She said the exoskeleton makes her feel like herself again.

She was diagnosed with an inoperable benign tumor in her spinal cord in 2015 which resulted in a sudden and permanent loss of mobility in her legs. At the time, doctors told Angus that she would need a wheelchair to move for the rest of her life.

Now she is part of the project and defying all odds.

“After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

There’s a bit more information in the November 8, 2023 Simon Fraser University (SFU) news release (which has the same embedded video as the Global News article) by Ray Sharma,

The state-of-the-art robotic exoskeleton known as XoMotion is the result of a decade of research and the product of an SFU spin off company, Human in Motion Robotics (HMR) Inc. The company has recently garnered millions in investment, an overseas partnership and a suite of new offices in Vancouver.

XoMotion allows individuals with mobility challenges to stand up and walk on their own, without additional support. When in use, XoMotion maintains its stability and simultaneously encompasses all the ranges of motion and degrees of freedom needed for users to self-balance, walk, sidestep, climb stairs, crouch, and more. 

Sensors within the lower-limb exoskeleton mimic the human body’s sense of logic to identify structures along the path, and in-turn, generate a fully balanced motion.

SFU professors Siamak Arzanpour and Edward Park, both researchers in the School of Mechatronic Systems Engineering, began work on the device in 2014 with a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement. 

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” says Arzanpour. 

The SFU professors, who first met in 2001 as graduate students at the University of Toronto, co-founded HMR in 2016, bringing together a group of students, end-users, therapists, and organizations to build upon the exoskeleton. Currently, 70 per cent of HMR employees are SFU graduates. 

In recent years, HMR has garnered multiple streams of investment, including a contract with Innovative Solutions Canada, and $10 million in funding during their Series A round in May, including an $8 million investment and strategic partnership from Beno TNR, a prominent Korean technology investment firm.

I decided to bring the embedded video here; it runs a little over two minutes,

You can find the Human in Motion Robotics (HMR) website here.

FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was and is: catching up. On the plus side, my 2023 backlog of posts to be published (roughly six months’ worth) was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it’s polluting, it’s affecting the environment, etc. Somehow the fact that electricity, which underpins so much of our ‘smart’ society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s an October 16, 2023 posting “The cost of building ChatGPT” that gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.

This next posting would usually be included with my other art/sci postings but it touches on these issues. See my October 13, 2023 posting about Toronto’s Art/Sci Salon events; in particular, there’s the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper, “The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions,” co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts, with an emphasis on brainlike computing, that attest to those efforts,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative nuclear energy, fusion, which is considered ‘green’ or greener anyway. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation in its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium) was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. To give you an idea of how difficult it was to get attention for this issue, there’s a September 1, 2017 article by John Rasko and Carl Power for the Guardian illustrating the issue. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And, Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more (“The Biggest Lie in Medical History,” 2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose work on the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list,‘ started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was, art/sci-wise, including events and other projects, until I reviewed this year’s postings. This is a selection from 2023 but there’s a lot more on the blog; just use the search term “art/sci,” or “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year I subtitled this section, “Aliens on earth: machinic biology and/or biological machinery?” Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget where military expenditures were going to be increased, something which could have implications for our science and technology research.

Then things changed as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) news online website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report concerning the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase of interest in commercializing brain and brainlike engineering technologies, as well as more discussion about ethics.

Colonizing the brain?

UNESCO held events such as, this noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” and this noted in my July 7, 2023 posting “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research, my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone, there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though it has a YouTube channel featuring its other events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area, as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting but there is a great deal of interest in quantum computing both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Shape-changing speaker (aka acoustic swarms) for sound control

To alleviate any concerns, these swarms are not kin to Michael Crichton’s swarms in his 2002 novel, Prey, or his 2011 novel, Micro (published after his death).

A September 21, 2023 news item on ScienceDaily announces this ‘acoustic swarm’ research,

In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.

The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.

A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, each about an inch in diameter, the microphones automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.

The team published its findings Sept. 21 [2023] in Nature Communications.

A September 21, 2023 University of Washington (state) news release (also on EurekAlert), which originated the news item, delves further into the work, Note: Links have been removed,

“If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”

Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.

The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible since greater distances make differentiating and locating people speaking easier. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.

“If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” said co-lead author Tuochao Chen, a UW doctoral student in the Allen School. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room.”
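As an aside for curious readers, the time-delay cue Chen describes is easy to demonstrate. The sketch below is my own toy example in Python/NumPy, not the team’s code (their system layers deep neural networks on top of this cue): it simulates one voice arriving at two microphones, the second of which is half a metre farther from the speaker, and recovers the delay by cross-correlation. The sample rate and distances are assumptions for illustration.

```python
import numpy as np

fs = 16_000               # sample rate in Hz (assumed for illustration)
speed_of_sound = 343.0    # metres per second

rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)   # one second of noise standing in for a voice

# Microphone 2 is 0.5 m farther from the speaker, so the voice arrives later.
delay_samples = int(round(0.5 / speed_of_sound * fs))   # ~23 samples
mic1 = speech
mic2 = np.concatenate([np.zeros(delay_samples), speech])[: len(speech)]

# Cross-correlate the two recordings and find the lag where they best align.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(np.argmax(corr)) - (len(mic1) - 1)

print(f"true delay: {delay_samples} samples, estimated: {lag} samples")
print(f"implied extra distance: {lag / fs * speed_of_sound:.2f} m")
```

With seven self-deployed robots instead of two fixed microphones, many such pairwise delays can be combined to place each voice in the room; that harder part is what the team’s neural networks handle.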

The team tested the robots in offices, living rooms and kitchens with groups of three to five people speaking. Across all these environments, the system could discern different voices within 1.6 feet (50 centimeters) of each other 90% of the time, without prior information about the number of speakers. The system was able to process three seconds of audio in 1.82 seconds on average — fast enough for live streaming, though a bit too long for real-time communications such as video calls.

As the technology progresses, researchers say, acoustic swarms might be deployed in smart homes to better differentiate people talking with smart speakers. That could potentially allow only people sitting on a couch, in an “active zone,” to vocally control a TV, for example.

Researchers plan to eventually make microphone robots that can move around rooms, instead of being limited to tables. The team is also investigating whether the speakers can emit sounds that allow for real-world mute and active zones, so people in different parts of a room can hear different audio. The current study is another step toward science fiction technologies, such as the “cone of silence” in “Get Smart” and “Dune,” the authors write.

Of course, any technology that evokes comparison to fictional spy tools will raise questions of privacy. Researchers acknowledge the potential for misuse, so they have included guards against this: The microphones navigate with sound, not an onboard camera like other similar systems. The robots are easily visible and their lights blink when they’re active. Instead of processing the audio in the cloud, as most smart speakers do, the acoustic swarms process all the audio locally, as a privacy constraint. And even though some people’s first thoughts may be about surveillance, the system can be used for the opposite, the team says.

“It has the potential to actually benefit privacy, beyond what current smart speakers allow,” Itani said. “I can say, ‘Don’t record anything around my desk,’ and our system will create a bubble 3 feet around me. Nothing in this bubble would be recorded. Or if two groups are speaking beside each other and one group is having a private conversation, while the other group is recording, one conversation can be in a mute zone, and it will remain private.”

Takuya Yoshioka, a principal research manager at Microsoft, is a co-author on this paper, and Shyam Gollakota, a professor in the Allen School, is a senior author. The research was funded by a Moore Inventor Fellow award.

Two of the paper’s authors, Malek Itani and Tuochao Chen, have written a ‘Behind the Paper’ article for Nature.com’s Electrical and Electronic Engineering Community, from their September 21, 2023 posting,

Sound is a versatile medium. In addition to being one of the primary means of communication for us humans, it serves numerous purposes for organisms across the animal kingdom. Particularly, many animals use sound to localize themselves and navigate in their environment. Bats, for example, emit ultrasonic sound pulses to move around and find food in the dark. Similar behavior can be observed in Beluga whales to avoid obstacles and locate one another.

Various animals also have a tendency to cluster together into swarms, forming a unit greater than the sum of its parts. Famously, bees agglomerate into swarms to more efficiently search for a new colony. Birds flock to evade predators. These behaviors have caught the attention of scientists for quite some time, inspiring a handful of models for crowd control, optimization and even robotics. 

A key challenge in building robot swarms for practical purposes is the ability for the robots to localize themselves, not just within the swarm, but also relative to other important landmarks. …

Here’s a link to and a citation for the paper,

Creating speech zones with self-distributing acoustic swarms by Malek Itani, Tuochao Chen, Takuya Yoshioka & Shyamnath Gollakota. Nature Communications volume 14, Article number: 5684 (2023) DOI: https://doi.org/10.1038/s41467-023-40869-8 Published: 21 September 2023

This paper is open access.

Robot that can maneuver through living lung tissue

Caption: Overview of the semiautonomous medical robot’s three stages in the lungs. Credit: Kuntz et al.

This looks like one robot operating on another robot; I guess the researchers want to emphasize the fact that this autonomous surgical procedure isn’t currently being tested on human beings.

There’s more in a September 21, 2023 news item on ScienceDaily,

Scientists have shown that their steerable lung robot can autonomously maneuver the intricacies of the lung, while avoiding important lung structures.

Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.

Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.

Thankfully, there’s a September 21, 2023 University of North Carolina (UNC) news release (also on EurekAlert), which originated the news item, to provide more information, Note: Links have been removed,

“This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”

The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.

The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.

As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.

To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Using this 3-D model and once the needle has been positioned for launch, their AI-driven software instructs it to automatically travel from “Point A” to “Point B” while avoiding important structures.

“The autonomous steerable needle we’ve developed is highly compact, but the system is packed with a suite of technologies that allow the needle to navigate autonomously in real-time,” said Alterovitz, the principal investigator on the project and senior author on the paper. “It’s akin to a self-driving car, but it navigates through lung tissue, avoiding obstacles like significant blood vessels as it travels to its destination.”

The needle can also account for respiratory motion. Unlike other organs, the lungs are constantly expanding and contracting in the chest cavity. This can make targeting especially difficult in a living, breathing subject. According to Akulian, it’s like shooting at a moving target.

The researchers tested their robot while the laboratory model performed intermittent breath holds; the robot is programmed to move forward only while the breath is held.
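
In control terms, this is gating: the needle is allowed to advance only while the breathing signal says “hold.” A toy sketch of that loop might look like the following, where breath_held() and advance_needle() are hypothetical stand-ins for the system’s respiration sensor and motion controller, not its actual API.

```python
import time

# Toy respiration-gating loop: advance the needle only during breath holds,
# when the target is (nearly) stationary. Both callbacks are hypothetical.
def gated_insertion(breath_held, advance_needle, target_depth_mm, step_mm=0.5):
    depth = 0.0
    while depth < target_depth_mm:
        if breath_held():           # gate is open: safe to move
            advance_needle(step_mm)
            depth += step_mm
        else:                       # gate is closed: wait out the breath
            time.sleep(0.05)
    return depth
```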

“There remain some nuances in terms of the robot’s ability to acquire targets and then actually get to them effectively,” said Akulian, who is also a member of the UNC Lineberger Comprehensive Cancer Center, “and while there’s still a lot of work to be done, I’m very excited about continuing to push the boundaries of what we can do for patients with the world-class experts that are here.”

“We plan to continue creating new autonomous medical robots that combine the strengths of robotics and AI to improve medical outcomes for patients facing a variety of health challenges while providing guarantees on patient safety,” added Alterovitz.

Here’s a link to and a citation for the paper,

Autonomous medical needle steering in vivo by Alan Kuntz, Maxwell Emerson, Tayfun Efe Ertop, Inbar Fried, Mengyu Fu, Janine Hoelscher, Margaret Rox, Jason Akulian, Erin A. Gillaspie, Yueh Z. Lee, Fabien Maldonado, Robert J. Webster III, and Ron Alterovitz. Science Robotics 20 Sep 2023 Vol 8, Issue 82 DOI: 10.1126/scirobotics.adf7614

This paper is behind a paywall.

An artificial, multisensory integrated neuron makes AI (artificial intelligence) smarter

More brainlike (neuromorphic) computing but this time, it’s all about the senses. From a September 15, 2023 news item on ScienceDaily, Note: A link has been removed,

The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.

Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work today (Sept. 15 [2023]) in Nature Communications.

A September 12, 2023 Pennsylvania State University (Penn State) news release (also on EurekAlert but published September 15, 2023) by Ashley WennersHerron, which originated the news item, provides more detail about the research,

“Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

“This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It’s the equivalent of seeing an “on” light on the stove and feeling heat coming off of a burner — seeing the light on doesn’t necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand’s response. In this case, the researchers measured the artificial neuron’s version of this by observing the signaling outputs that resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimuli were encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor — or a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron — simulated as electrical output — increased when both visual and tactile signals were weak.

“Interestingly, this effect resonates remarkably well with its biological counterpart — a visual memory naturally enhances the sensitivity to tactile stimulus,” said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. “When cues are weak, you need to combine them to better understand the information, and that’s what we saw in the results.”
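
You can get a feel for super-additive summation with a few lines of arithmetic. Suppose, purely for illustration (this sigmoid is my stand-in, not the paper’s measured device characteristics), that the neuron’s output is a saturating function of its total input. Two weak cues then land on the steep part of the curve, and the combined response beats the sum of the two separate responses:

```python
import math

# Illustrative saturating response curve: "spiking" output vs. input drive.
def response(drive, threshold=1.0, gain=4.0):
    return 1.0 / (1.0 + math.exp(-gain * (drive - threshold)))

visual, tactile = 0.4, 0.4                            # two weak unimodal cues
unimodal_sum = response(visual) + response(tactile)   # ≈ 0.17
integrated = response(visual + tactile)               # ≈ 0.31, super-additive

# Strong cues sit on the flat, saturated part of the curve, so the same
# comparison comes out sub-additive, matching the "weak signals" finding.
strong_sum = response(1.5) + response(1.5)            # ≈ 1.76
strong_integrated = response(3.0)                     # ≈ 1.00
```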

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

“The super additive summation of weak visual and tactile cues is the key accomplishment of our research,” said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. “For this work, we only looked into two senses. We’re working to identify the proper scenario to incorporate more senses and see what benefits they may offer.”

Harikrishnan Ravichandran, a fourth-year doctoral student in engineering science and mechanics at Penn State, also co-authored this paper.

The Army Research Office and the National Science Foundation supported this work.

Here’s a link to and a citation for the paper,

A bio-inspired visuotactile neuron for multisensory integration by Muhtasim Ul Karim Sadaf, Najam U Sakib, Andrew Pannone, Harikrishnan Ravichandran & Saptarshi Das. Nature Communications volume 14, Article number: 5729 (2023) DOI: https://doi.org/10.1038/s41467-023-40686-z Published: 15 September 2023

This paper is open access.

Purifying DNA origami nanostructures with a LEGO robot

This July 20, 2023 article by Bob Yirka for phys.org highlights some frugal science, Note: A link has been removed,

A team of bioengineers at Arizona State University has found a way to use a LEGO robot as a gradient mixer in one part of a process to create DNA origami nanostructures. In their paper published on the open-access site PLOS [Public Library of Science] ONE, the group describes how they made their mixer and how it performs.

Creating DNA [deoxyribonucleic acid] origami structures requires purifying the origami nanostructures. This is typically done using rate-zonal centrifugation, which involves the use of a relatively expensive piece of machinery, a gradient mixer. In this new effort, the team at ASU has found that it is possible to build such a mixer using off-the-shelf LEGO kits.
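
For context, commercial gradient makers for rate-zonal centrifugation are typically two-chamber devices: a mixing chamber of light sucrose solution is pumped out into the centrifuge tube while denser solution bleeds in from a reservoir at half the outflow rate, so the delivered concentration ramps linearly from light to dense. I can’t say whether the LEGO build mimics that exact geometry, but here is a small numerical sketch of the principle any such mixer has to reproduce (my own illustration, with made-up sucrose percentages, not the ASU team’s code):

```python
# Two-chamber gradient-maker sketch: withdraw aliquots from the mixing
# chamber while refilling half the withdrawn volume from the reservoir.
# c_mix / c_res: starting %sucrose in the mixing chamber and reservoir.
def gradient_profile(c_mix=10.0, c_res=50.0, v0_ml=20.0, dv_ml=0.1):
    v_mix, c = v0_ml, c_mix
    delivered, profile = 0.0, []
    while v_mix > dv_ml:
        delivered += dv_ml
        profile.append((delivered, c))    # aliquot leaves at the current mix
        solute = c * (v_mix - dv_ml) + c_res * (dv_ml / 2.0)
        v_mix -= dv_ml / 2.0              # net volume change per step
        c = solute / v_mix                # reservoir input enriches the mix
    return profile

# The (volume, concentration) pairs ramp almost linearly from c_mix toward
# c_res as the chambers drain, which is the gradient the tube ends up holding.
```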

I found a video provided by MindSpark Media describing the process on YouTube,

I’d love to know who paid for the video and why. This is pretty slick and it’s not from Arizona State University’s (ASU) media team.

It gets more interesting on the MindSpark Media About webpage,

MindSpark Media is an independent media unit focusing on all major Media & Marketing services that includes Media Buying and Selling activities, bringing out special features on various supplements/country reports and international features on topics of interest in association with various leading English & Arabic vernaculars in the UAE [United Arab Emirates] and across MENA [Middle East and North Africa].

MindSpark Media is a complete media-selling experience that offers its clientele a wholesome exposure to the best media brands in the country. We also offer an opportunity to meet up and interact with the top brass of the industry & corporates for their advertorial packages including one-to-one interviews with photo-shoot sessions etc.

MindSpark Media delivers client-tailored advertorials that includes their product advertisements, features and interviews published in the form of special reports, supplements & special features, which are released and distributed with top-notch publications in the UAE.

We also focus on advertising activities in the media-buying sector such as Print, Outdoor, TV, Radio and Corporate Video, e-commerce & web-designing for clients in the UAE, MENA and beyond.

Perhaps the researchers are hoping to commercialize the work in some fashion? I couldn’t find any mention of a startup or other commercial entity but it’s a common practice these days in the US and, increasingly, many other countries.

Getting back to the research, here’s a link to and a citation for the paper,

Gradient-mixing LEGO robots for purifying DNA origami nanostructures of multiple components by rate-zonal centrifugation by Jason Sentosa, Franky Djutanta, Brian Horne, Dominic Showkeir, Robert Rezvani, Chloe Leff, Swechchha Pradhan, Rizal F. Hariadi. PLOS ONE (2023). DOI: 10.1371/journal.pone.0283134 Published: July 19, 2023

This paper is open access.

Big Conversation Season (podcast) Finale on ‘AI and the Future of Humanity’ available on Friday, September 22, 2023

Three guys (all Brits) talk about the question “Robot Race: Could AI Ever Replace Humanity?” in part 1 of a two-part discussion that belongs to a larger video podcast series known as the ‘Big Conversation’; part 2 of this ‘Big Conversation’ is going to be available on Friday, September 22, 2023.

I haven’t listened to the entire first part of the conversation yet. So far, it seems quite engaging and provocative (especially the first five minutes). They’re not arguing but, since I don’t want to spoil the surprise, do watch the first bit (the first 5 mins. of a 53 mins. 38 secs. podcast).

You can’t ask more of a conversation than to be provoked into thinking. That said …

Pause

I’m a little hesitant to include much about faith and religion here but this two-part series touches on topics that have been discussed here many times. So, the ‘Big Conversation’ is produced through a Christian group. Here’s more about the podcast series and its producers from the Big Conversation webpage,

The Big Conversation is a video series from Premier Unbelievable? featuring world-class thinkers across the religious and non-religious communities. Exploring science, faith, philosophy and what it means to be human [emphasis mine]. The Big Conversation is produced by Premier in partnership with John Templeton Foundation.

Premier consists of Premier Christian Media Trust registered as a charity (no. 287610) and as a company limited by guarantee (no. 01743091) with two fully-owned trading subsidiaries: Premier Christian Communications Ltd (no. 02816074) and Christian Communication Partnership Ltd (no. 03422292). All three companies are registered in England & Wales with a registered office address of Unit 6 April Court, Syborn Way, Crowborough, TN6 3DZ.

I haven’t seen any signs of proselytizing and like almost every other website in existence, they are very interested in getting you to be on their newsletter email list, to donate, etc.

Back to the conversation.

The Robot Race, Parts 1 & 2: Could AI ever replace humanity?

Here’s a description of the Big Conversation series and two specific podcasts, from the September 20, 2023 press release (received via email),

Big Conversation Season Finale on AI and the Future of Humanity Available this Friday

Featuring AI expert Dr. Nigel Crook, episode explores ‘The Robot Race: Could AI ever replace humans?’

WHAT: 
Currently in its 5th season, The Big Conversation, hosted by comedian and apologist Andy Kind, features some of the biggest minds in the Christian, atheist and religious world to debate some of the biggest questions of science, faith, philosophy and what it means to be human. 

Episodes 5 & 6 of this season feature a two-part discussion about robotics, the future of artificial intelligence and the subsequent concerns of morality surrounding these advancements. This thought-provoking exchange on ethics in AI is sure to leave listeners informed and intrigued to learn more regarding the future of humanity relating to cyber-dependency, automation regulation, AI agency and abuses of power in technology.

WHO:  
To help us understand the complexities of AI, including the power and ethics around the subject – and appropriate concern for the future of humanity – The Big Conversation host Andy Kind spoke with AI Expert Dr. Nigel Crook and Neuroscientist Anil Seth.   

Dr. Nigel Crook, a distinguished figure recognized for his innovative contributions to the realm of AI and robotics, focuses extensively on research related to machine learning inspired by biological processes and the domain of social robotics. He serves as the Professor of Artificial Intelligence and Robotics at Oxford Brookes University and is the Founding Director of the Institute for Ethical AI, where his work revolves around the concept of self-governing ethical robots.

WHEN:  
Episode 5, the first in the two-part AI series, released September 8 [2023], and episode 6 releases Friday, Sept. 22 [2023].  

WHERE:  
These episodes are available at https://www.thebigconversation.show/ as well as all major podcast platforms.  

I have a little more about Anil Seth from the Big Conversation Episode 5 webpage,

… Anil Seth, Professor of Cognitive & Computational Neuroscience at the University of Sussex, winner of The Michael Faraday Prize and Lecture 2023, and author of “Being You: A New Science of Consciousness”

There’s also a bit about Seth in my June 30, 2017 posting “A question of consciousness: Facebotlish (a new language); a July 5, 2017 rap guide performance in Vancouver, Canada; Tom Stoppard’s play; and a little more,” scroll down to the subhead titled ‘Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness’.