
Living technology possibilities

Before launching into the possibilities, here are two descriptions of ‘living technology’ from the European Centre for Living Technology’s (ECLT) homepage,

Goals

Promote, carry out and coordinate research activities and the diffusion of scientific results in the field of living technology. The scientific areas for living technology are the nano-bio-technologies, self-organizing and evolving information and production technologies, and adaptive complex systems.

History

Founded in 2004 the European Centre for Living Technology is an international and interdisciplinary research centre established as an inter-university consortium, currently involving 18 European and extra-European institutional affiliates.

The Centre is devoted to the study of technologies that exhibit life-like properties including self-organization, adaptability and the capacity to evolve.

Despite the reference to “nano-bio-technologies,” this October 11, 2023 news item on ScienceDaily focuses on microscale living technology,

In a recent article in the high-profile journal “Advanced Materials,” researchers in Chemnitz show just how close and necessary the transition to sustainable living technology is, based on the morphogenesis of self-assembling microelectronic modules, strengthening the recent membership of Chemnitz University of Technology in the European Centre for Living Technology (ECLT) in Venice.

An October 11, 2023 Chemnitz University of Technology (Technische Universität Chemnitz; TU Chemnitz) press release (also on EurekAlert), which originated the news item, delves further into the topic, Note: Links have been removed,

It is now apparent that the mass-produced artefacts of technology in our increasingly densely populated world – whether electronic devices, cars, batteries, phones, household appliances, or industrial robots – are increasingly at odds with the sustainable bounded ecosystems achieved by living organisms based on cells over millions of years. Cells provide organisms with soft and sustainable environmental interactions with complete recycling of material components, except in a few notable cases like the creation of oxygen in the atmosphere, and of the fossil fuel reserves of oil and coal (as a result of missing biocatalysts). However, the fantastic information content of biological cells (gigabits of information in DNA alone) and the complexities of protein biochemistry for metabolism seem to place a cellular approach well beyond the current capabilities of technology, and prevent the development of intrinsically sustainable technology.

SMARTLETs: tiny shape-changing modules that collectively self-organize to larger more complex systems

A recent perspective review published in the very high impact journal Advanced Materials this month [October 2023] by researchers at the Research Center for Materials, Architectures and Integration of Nanomembranes (MAIN) of Chemnitz University of Technology, shows how a novel form of high-information-content Living Technology is now within reach, based on microrobotic electronic modules called SMARTLETs, which will soon be capable of self-assembling into complex artificial organisms. The research belongs to the new field of Microelectronic Morphogenesis, the creation of form under microelectronic control, and builds on work over the previous years at Chemnitz University of Technology to construct self-folding and self-locomoting thin film electronic modules, now carrying tiny silicon chiplets between the folds, for a massive increase in information processing capabilities. Sufficient information can now be stored in each module to encode not only complex functions but fabrication recipes (electronic genomes) for clean rooms to allow the modules to be copied and evolved like cells, but safely because of the gating of reproduction through human operated clean room facilities.

Electrical self-awareness during self-assembly

In addition, the chiplets can provide neuromorphic learning capabilities allowing them to improve performance during operation. A further key feature of the specific self-assembly of these modules, based on matching physical bar codes, is that electrical and fluidic connections can be achieved between modules. These can then be employed, to make the electronic chiplets on board “aware” of the state of assembly, and of potential errors, allowing them to direct repair, correct mis-assembly, induce disassembly and form collective functions spanning many modules. Such functions include extended communication (antennae), power harvesting and redistribution, remote sensing, material redistribution etc.

So why is this technology vital for sustainability?

The complete digital fab description for modules, for which actually only a limited number of types are required even for complex organisms, allows their material content, responsible originator and environmentally relevant exposure all to be read out. Prof. Dagmar Nuissl-Gesmann from the Law Department at Chemnitz University of Technology observes that “this fine-grained documentation of responsibility intrinsic down to microscopic scales will be a game changer in allowing legal assignment of environmental and social responsibility for our technical artefacts”.

Furthermore, the self-locomotion and self-assembly-disassembly capabilities allows the modules to self-sort for recycling. Modules can be regained, reused, reconfigured, and redeployed in different artificial organisms. If they are damaged, then their limited and documented types facilitate efficient custom recycling of materials with established and optimized protocols for these sorted and now identical entities. These capabilities complement the other more obvious advantages in terms of design development and reuse in this novel reconfigurable media. As Prof. Marlen Arnold, an expert in Sustainability of the Faculty of Economics and Business Administration observes, “Even at high volumes of deployment use, these properties could provide this technology with a hitherto unprecedented level of sustainability which would set the bar for future technologies to share our planet safely with us.”

Contribution to European Living Technology

“This research is a first contribution of MAIN/Chemnitz University of Technology, as a new member of the European Centre for Living Technology (ECLT), based in Venice,” says Prof. Oliver G. Schmidt, Scientific Director of the Research Center MAIN, who adds, “It’s fantastic to see that our deep collaboration with ECLT is paying off so quickly with immediate transdisciplinary benefit for several scientific communities.” “Theoretical research at the ECLT has been urgently in need of novel technology systems able to implement the core properties of living systems,” comments Prof. John McCaskill, coauthor of the paper and a founding director of the ECLT in 2004.

Here’s a link to and a citation for the researchers’ perspective paper,

Microelectronic Morphogenesis: Smart Materials with Electronics Assembling into Artificial Organisms by John S. McCaskill, Daniil Karnaushenko, Minshen Zhu, and Oliver G. Schmidt. Advanced Materials. First published: 09 October 2023. DOI: https://doi.org/10.1002/adma.202306344

This paper is open access.

XoMotion, an exoskeleton developed in Canada, causes commotion

I first stumbled across these researchers in 2016 when their project was known as “Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE).” In my January 20, 2016 posting, “#BCTECH: being at the Summit (Jan. 18-19, 2016),” an event put on by the province of British Columbia (BC, Canada) and the BC Innovation Council (BCIC), I visited a number of booths and talks at the #BC TECH Summit and had this to say about WLLAE,

“The Wearable Lower Limb Anthropomorphic Exoskeleton (WLLAE) – a lightweight, battery-operated and ergonomic robotic system to help those with mobility issues improve their lives. The exoskeleton features joints and links that correspond to those of a human body and sync with motion. SFU has designed, manufactured and tested a proof-of-concept prototype and the current version can mimic all the motions of hip joints.” The researchers (Siamak Arzanpour and Edward Park) pointed out that the ability to mimic all the motions of the hip is a big difference between their system and others which only allow the leg to move forward or back. They rushed the last couple of months to get this system ready for the Summit. In fact, they received their patent for the system the night before (Jan. 17, 2016) the Summit opened.

Unfortunately, there aren’t any pictures of WLLAE yet and the proof-of-concept version may differ significantly from the final version. This system could be used to help people regain movement (paralysis/frail seniors) and I believe there’s a possibility it could be used to enhance human performance (soldiers/athletes). The researchers still have some significant hoops to jump through before getting to the human clinical trial stage. They need to refine their apparatus, ensure that it can be safely operated, and further develop the interface between human and machine. I believe WLLAE is considered a neuroprosthetic device. While it’s not a fake leg or arm, it enables movement (prosthetic) and it operates on brain waves (neuro). It’s a very exciting area of research; consequently, there’s a lot of international competition. [ETA January 3, 2024: I’m pretty sure I got the neuroprosthetic part wrong]

Time moved on and there was a name change and then there was this November 10, 2023 article by Jeremy Hainsworth for the Vancouver is Awesome website,

Vancouver-based fashion designer Chloe Angus thought she’d be in a wheelchair for the rest of her life after being diagnosed with an inoperable benign tumour in her spinal cord in 2015, resulting in permanent loss of mobility in her legs.

Now, however, she’s been using a state-of-the-art robotic exoskeleton known as XoMotion that can help physically disabled people self-balance, walk, sidestep, climb stairs and crouch.

“The first time I walked with the exoskeleton was a jaw-dropping experience,” said Angus. “After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

She added the exoskeleton has the potential to completely change the world for people with motion disabilities.

XoMotion is the result of a decade of research and the product of a Simon Fraser University spinoff company, Human in Motion Robotics (HMR) Inc. It’s the brainchild of professors Siamak Arzanpour and Edward Park.

Arzanpour and Park, both researchers in the Burnaby-based university’s School of Mechatronic Systems Engineering, began work on the device in 2014. They had a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement.

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” Arzanpour said.

A November 15, 2023 article (with an embedded video) by Amy Judd & Alissa Thibault for Global News (television) highlights Alexander’s story,

SFU professors Siamak Arzanpour and Edward Park wanted to help people with motion disabilities to walk freely, naturally and independently.

The exoskeleton [XoMotion] is now the most advanced of its kind in the world.

Chloe Angus, who lost her mobility in her legs in 2015, now works for the team.

She said the exoskeleton makes her feel like herself again.

She was diagnosed with an inoperable benign tumor in her spinal cord in 2015 which resulted in a sudden and permanent loss of mobility in her legs. At the time, doctors told Angus that she would need a wheelchair to move for the rest of her life.

Now she is part of the project and defying all odds.

“After all these years, the exoskeleton let me stand up and walk on my own without falling. I felt like myself again.”

There’s a bit more information in the November 8, 2023 Simon Fraser University (SFU) news release (which has the same embedded video as the Global News article) by Ray Sharma,

The state-of-the-art robotic exoskeleton known as XoMotion is the result of a decade of research and the product of an SFU spin off company, Human in Motion Robotics (HMR) Inc. The company has recently garnered millions in investment, an overseas partnership and a suite of new offices in Vancouver.

XoMotion allows individuals with mobility challenges to stand up and walk on their own, without additional support. When in use, XoMotion maintains its stability and simultaneously encompasses all the ranges of motion and degrees of freedom needed for users to self-balance, walk, sidestep, climb stairs, crouch, and more. 

Sensors within the lower-limb exoskeleton mimic the human body’s sense of logic to identify structures along the path, and in-turn, generate a fully balanced motion.

SFU professors Siamak Arzanpour and Edward Park, both researchers in the School of Mechatronic Systems Engineering, began work on the device in 2014 with a vision to enhance exoskeleton technology and empower individuals with mobility challenges to have more options for movement. 

“We felt that there was an immediate need to help people with motion disabilities to walk again, with a full range of motion. At the time, exoskeletons could only walk forward. That was the only motion possible,” says Arzanpour. 

The SFU professors, who first met in 2001 as graduate students at the University of Toronto, co-founded HMR in 2016, bringing together a group of students, end-users, therapists, and organizations to build upon the exoskeleton. Currently, 70 per cent of HMR employees are SFU graduates. 

In recent years, HMR has garnered multiple streams of investment, including a contract with Innovative Solutions Canada, and $10 million in funding during their Series A round in May, including an $8 million investment and strategic partnership from Beno TNR, a prominent Korean technology investment firm.

I decided to bring the embedded video here; it runs a little over two minutes,

You can find the Human in Motion Robotics (HMR) website here.

FrogHeart’s 2023 comes to an end as 2024 comes into view

My personal theme for this last year (2023) and for the coming year was and is: catching up. On the plus side, my 2023 backlog (roughly six months) to be published was whittled down considerably. On the minus side, I start 2024 with a backlog of two to three months.

2023 on this blog had a lot in common with 2022 (see my December 31, 2022 posting), which may be due to what’s going on in the world of emerging science and technology or to my personal interests or possibly a bit of both. On to 2023 and a further blurring of boundaries:

Energy, computing and the environment

The argument against paper is that it uses up resources, it’s polluting, it’s affecting the environment, etc. Somehow, the fact that electricity, which underpins so much of our ‘smart’ society, does the same thing is left out of the discussion.

Neuromorphic (brainlike) computing and lower energy

Before launching into the stories about lowering energy usage, here’s an October 16, 2023 posting “The cost of building ChatGPT” that gives you some idea of the consequences of our insatiable desire for more computing and more ‘smart’ devices,

In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]

“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.

Why it matters: Microsoft’s five WDM [West Des Moines in Iowa] data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.

Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.

The focus is AI but it doesn’t take long to realize that all computing has energy and environmental costs. I have more about Ren’s work and about water shortages in the “The cost of building ChatGPT” posting.

This next posting would usually be included with my other art/sci postings but it touches on these issues: my October 13, 2023 posting about Toronto’s Art/Sci Salon events includes the Streaming Carbon Footprint event (just scroll down to the appropriate subhead). For the interested, I also found this 2022 paper “The Carbon Footprint of Streaming Media: Problems, Calculations, Solutions” co-authored by one of the artist/researchers (Laura U. Marks, philosopher and scholar of new media and film at Simon Fraser University) who presented at the Toronto event.

I’m late to the party; Thomas Daigle posted a January 2, 2020 article about energy use and our appetite for computing and ‘smart’ devices for the Canadian Broadcasting Corporation’s online news,

For those of us binge-watching TV shows, installing new smartphone apps or sharing family photos on social media over the holidays, it may seem like an abstract predicament.

The gigabytes of data we’re using — although invisible — come at a significant cost to the environment. Some experts say it rivals that of the airline industry. 

And as more smart devices rely on data to operate (think internet-connected refrigerators or self-driving cars), their electricity demands are set to skyrocket.

“We are using an immense amount of energy to drive this data revolution,” said Jane Kearns, an environment and technology expert at MaRS Discovery District, an innovation hub in Toronto.

“It has real implications for our climate.”

Some good news

Researchers are working on ways to lower the energy and environmental costs; here’s a sampling of 2023 posts, with an emphasis on brainlike computing, that attest to it,

If there’s an industry that can make neuromorphic computing and energy savings sexy, it’s the automotive industry,

On the energy front,

Most people are familiar with nuclear fission and some of its attendant issues. There is an alternative nuclear energy, fusion, which is considered ‘green,’ or greener anyway. General Fusion is a local (Vancouver area) company focused on developing fusion energy, alongside competitors from all over the planet.

Part of what makes fusion energy attractive is that salt water or sea water can be used in its production and, according to that December posting, there are other applications for salt water power,

More encouraging developments in environmental science

Again, this is a selection. You’ll find a number of nano cellulose research projects and a couple of seaweed projects (seaweed research seems to be of increasing interest).

All by myself (neuromorphic engineering)

Neuromorphic computing is a subset of neuromorphic engineering and I stumbled across an article that outlines the similarities and differences. My ‘summary’ of the main points and a link to the original article can be found here,

Oops! I did it again. More AI panic

I included an overview of the various ‘recent’ panics (in my May 25, 2023 posting below) along with a few other posts about concerning developments, but it’s not all doom and gloom.

Governments have realized that regulation might be a good idea. The European Union has an AI Act, the UK held an AI Safety Summit in November 2023, the US has been discussing AI regulation in its various hearings, and there’s impending legislation in Canada (see professor and lawyer Michael Geist’s blog for more).

A long time coming, a nanomedicine comeuppance

Paolo Macchiarini is now infamous for his untested, dangerous approach to medicine. Like a lot of people, I was fooled too as you can see in my August 2, 2011 posting, “Body parts nano style,”

In early July 2011, there were reports of a new kind of transplant involving a body part made of a biocomposite. Andemariam Teklesenbet Beyene underwent a trachea transplant that required an artificial windpipe crafted by UK experts then flown to Sweden where Beyene’s stem cells were used to coat the windpipe before being transplanted into his body.

It is an extraordinary story not least because Beyene, a patient in a Swedish hospital planning to return to Eritrea after his PhD studies in Iceland, illustrates the international cooperation that made the transplant possible.

The scaffolding material for the artificial windpipe was developed by Professor Alex Seifalian at the University College London in a landmark piece of nanotechnology-enabled tissue engineering. …

Five years later I stumbled across problems with Macchiarini’s work as outlined in my April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 1 of 2)” and my other April 19, 2016 posting, “Macchiarini controversy and synthetic trachea transplants (part 2 of 2)“.

This year, Gretchen Vogel (whose work was featured in my 2016 posts) has written a June 21, 2023 update about the Macchiarini affair for Science magazine, Note: Links have been removed,

Surgeon Paolo Macchiarini, who was once hailed as a pioneer of stem cell medicine, was found guilty of gross assault against three of his patients today and sentenced to 2 years and 6 months in prison by an appeals court in Stockholm. The ruling comes a year after a Swedish district court found Macchiarini guilty of bodily harm in two of the cases and gave him a suspended sentence. After both the prosecution and Macchiarini appealed that ruling, the Svea Court of Appeal heard the case in April and May. Today’s ruling from the five-judge panel is largely a win for the prosecution—it had asked for a 5-year sentence whereas Macchiarini’s lawyer urged the appeals court to acquit him of all charges.

Macchiarini performed experimental surgeries on the three patients in 2011 and 2012 while working at the renowned Karolinska Institute. He implanted synthetic windpipes seeded with stem cells from the patients’ own bone marrow, with the hope the cells would multiply over time and provide an enduring replacement. All three patients died when the implants failed. One patient died suddenly when the implant caused massive bleeding just 4 months after it was implanted; the two others survived for 2.5 and nearly 5 years, respectively, but suffered painful and debilitating complications before their deaths.

In the ruling released today, the appeals judges disagreed with the district court’s decision that the first two patients were treated under “emergency” conditions. Both patients could have survived for a significant length of time without the surgeries, they said. The third case was an “emergency,” the court ruled, but the treatment was still indefensible because by then Macchiarini was well aware of the problems with the technique. (One patient had already died and the other had suffered severe complications.)

A fictionalized TV series (part of the Dr. Death anthology series) based on Macchiarini’s deceptions and a Dr. Death documentary are being broadcast/streamed in the US during January 2024. These come on the heels of a November 2023 Macchiarini documentary also broadcast/streamed on US television.

Dr. Death (anthology), based on the previews I’ve seen, is heavily US-centric, which is to be expected since Adam Ciralsky is involved in the production. Ciralsky wrote an exposé about Macchiarini for Vanity Fair published in 2016 (also featured in my 2016 postings). From a December 20, 2023 article by Julie Miller for Vanity Fair, Note: A link has been removed,

Seven years ago [2016], world-renowned surgeon Paolo Macchiarini was the subject of an ongoing Vanity Fair investigation. He had seduced award-winning NBC producer Benita Alexander while she was making a special about him, proposed, and promised her a wedding officiated by Pope Francis and attended by political A-listers. It was only after her designer wedding gown was made that Alexander learned Macchiarini was still married to his wife, and seemingly had no association with the famous names on their guest list.

Vanity Fair contributor Adam Ciralsky was in the midst of reporting the story for this magazine in the fall of 2015 when he turned to Dr. Ronald Schouten, a Harvard psychiatry professor. Ciralsky sought expert insight into the kind of fabulist who would invent and engage in such an audacious lie.

“I laid out the story to him, and he said, ‘Anybody who does this in their private life engages in the same conduct in their professional life,’” recalls Ciralsky, in a phone call with Vanity Fair. “I think you ought to take a hard look at his CVs.”

That was the turning point in the story for Ciralsky, a former CIA lawyer who soon learned that Macchiarini was more dangerous as a surgeon than a suitor. …

Here’s a link to Ciralsky’s original article, which I described this way, from my April 19, 2016 posting (part 2 of the Macchiarini controversy),

For some bizarre frosting on this disturbing cake (see part 1 of the Macchiarini controversy and synthetic trachea transplants for the medical science aspects), a January 5, 2016 Vanity Fair article by Adam Ciralsky documents Macchiarini’s courtship of an NBC ([US] National Broadcasting Corporation) news producer who was preparing a documentary about him and his work.

[from Ciralsky’s article]

“Macchiarini, 57, is a magnet for superlatives. He is commonly referred to as “world-renowned” and a “super-surgeon.” He is credited with medical miracles, including the world’s first synthetic organ transplant, which involved fashioning a trachea, or windpipe, out of plastic and then coating it with a patient’s own stem cells. That feat, in 2011, appeared to solve two of medicine’s more intractable problems—organ rejection and the lack of donor organs—and brought with it major media exposure for Macchiarini and his employer, Stockholm’s Karolinska Institute, home of the Nobel Prize in Physiology or Medicine. Macchiarini was now planning another first: a synthetic-trachea transplant on a child, a two-year-old Korean-Canadian girl named Hannah Warren, who had spent her entire life in a Seoul hospital. … “

Other players in the Macchiarini story

Pierre Delaere, a trachea expert and professor of head and neck surgery at KU Leuven (a university in Belgium) was one of the first to draw attention to Macchiarini’s dangerous and unethical practices. To give you an idea of how difficult it was to get attention for this issue, there’s a September 1, 2017 article by John Rasko and Carl Power for the Guardian illustrating the issue. Here’s what they had to say about Delaere and other early critics of the work, Note: Links have been removed,

Delaere was one of the earliest and harshest critics of Macchiarini’s engineered airways. Reports of their success always seemed like “hot air” to him. He could see no real evidence that the windpipe scaffolds were becoming living, functioning airways – in which case, they were destined to fail. The only question was how long it would take – weeks, months or a few years.

Delaere’s damning criticisms appeared in major medical journals, including the Lancet, but weren’t taken seriously by Karolinska’s leadership. Nor did they impress the institute’s ethics council when Delaere lodged a formal complaint. [emphases mine]

Support for Macchiarini remained strong, even as his patients began to die. In part, this is because the field of windpipe repair is a niche area. Few people at Karolinska, especially among those in power, knew enough about it to appreciate Delaere’s claims. Also, in such a highly competitive environment, people are keen to show allegiance to their superiors and wary of criticising them. The official report into the matter dubbed this the “bandwagon effect”.

With Macchiarini’s exploits endorsed by management and breathlessly reported in the media, it was all too easy to jump on that bandwagon.

And difficult to jump off. In early 2014, four Karolinska doctors defied the reigning culture of silence [emphasis mine] by complaining about Macchiarini. In their view, he was grossly misrepresenting his results and the health of his patients. An independent investigator agreed. But the vice-chancellor of Karolinska Institute, Anders Hamsten, wasn’t bound by this judgement. He officially cleared Macchiarini of scientific misconduct, allowing merely that he’d sometimes acted “without due care”.

For their efforts, the whistleblowers were punished. [emphasis mine] When Macchiarini accused one of them, Karl-Henrik Grinnemo, of stealing his work in a grant application, Hamsten found him guilty. As Grinnemo recalls, it nearly destroyed his career: “I didn’t receive any new grants. No one wanted to collaborate with me. We were doing good research, but it didn’t matter … I thought I was going to lose my lab, my staff – everything.”

This went on for three years until, just recently [2017], Grinnemo was cleared of all wrongdoing.

It is fitting that Macchiarini’s career unravelled at the Karolinska Institute. As the home of the Nobel prize in physiology or medicine, one of its ambitions is to create scientific celebrities. Every year, it gives science a show-business makeover, picking out from the mass of medical researchers those individuals deserving of superstardom. The idea is that scientific progress is driven by the genius of a few.

It’s a problematic idea with unfortunate side effects. A genius is a revolutionary by definition, a risk-taker and a law-breaker. Wasn’t something of this idea behind the special treatment Karolinska gave Macchiarini? Surely, he got away with so much because he was considered an exception to the rules with more than a whiff of the Nobel about him. At any rate, some of his most powerful friends were themselves Nobel judges until, with his fall from grace, they fell too.

The September 1, 2017 article by Rasko and Power is worth the read if you have the interest and the time. And, Delaere has written up a comprehensive analysis, which includes basic information about tracheas and more (“The Biggest Lie in Medical History,” 2020, PDF, 164 pp., Creative Commons Licence).

I also want to mention Leonid Schneider, science journalist and molecular cell biologist, whose coverage of the Macchiarini scandal on his ‘For Better Science’ website was also featured in my 2016 pieces. Schneider’s site has a page titled ‘Macchiarini’s trachea transplant patients: the full list’, started in 2017, which he continues to update with new information about the patients. The latest update was made on December 20, 2023.

Promising nanomedicine research but no promises and a caveat

Most of the research mentioned here is still in the laboratory. I don’t often come across work that has made its way to clinical trials since the focus of this blog is emerging science and technology,

*If you’re interested in the business of neurotechnology, the July 17, 2023 posting highlights a very good UNESCO report on the topic.

Funky music (sound and noise)

I have a couple of stories about using sound for wound healing, bioinspiration for soundproofing applications, detecting seismic activity, more data sonification, etc.

Same old, same old CRISPR

2023 was relatively quiet (no panics) where CRISPR developments are concerned but still quite active.

Art/Sci: a pretty active year

I didn’t realize how active the year was art/sciwise including events and other projects until I reviewed this year’s postings. This is a selection from 2023 but there’s a lot more on the blog, just use the search term, “art/sci,” or “art/science,” or “sciart.”

While I often feature events and projects from these groups (e.g., June 2, 2023 posting, “Metacreation Lab’s greatest hits of Summer 2023“), it’s possible for me to miss a few. So, you can check out Toronto’s Art/Sci Salon’s website (strong focus on visual art) and Simon Fraser University’s Metacreation Lab for Creative Artificial Intelligence website (strong focus on music).

My selection of this year’s postings is more heavily weighted to the ‘writing’ end of things.

Boundaries: life/nonlife

Last year I subtitled this section, ‘Aliens on earth: machinic biology and/or biological machinery?’ Here’s this year’s selection,

Canada’s 2023 budget … military

2023 featured an unusual budget in which military expenditures were to be increased, something which could have implications for our science and technology research.

Then things changed as Murray Brewster’s November 21, 2023 article for the Canadian Broadcasting Corporation’s (CBC) news online website comments, Note: A link has been removed,

There was a revelatory moment on the weekend as Defence Minister Bill Blair attempted to bridge the gap between rhetoric and reality in the Liberal government’s spending plans for his department and the Canadian military.

Asked about an anticipated (and long overdue) update to the country’s defence policy (supposedly made urgent two years ago by Russia’s full-on invasion of Ukraine), Blair acknowledged that the reset is now being viewed through a fiscal lens.

“We said we’re going to bring forward a new defence policy update. We’ve been working through that,” Blair told CBC’s Rosemary Barton Live on Sunday.

“The current fiscal environment that the country faces itself does require (that) that defence policy update … recognize (the) fiscal challenges. And so it’ll be part of … our future budget processes.”

One policy goal of the existing defence plan, Strong, Secure and Engaged, was to require that the military be able to concurrently deliver “two sustained deployments of 500 [to] 1,500 personnel in two different theaters of operation, including one as a lead nation.”

In a footnote, the recent estimates said the Canadian military is “currently unable to conduct multiple operations concurrently per the requirements laid out in the 2017 Defence Policy. Readiness of CAF force elements has continued to decrease over the course of the last year, aggravated by decreasing number of personnel and issues with equipment and vehicles.”

Some analysts say they believe that even if the federal government hits its overall budget reduction targets, what has been taken away from defence — and what’s about to be taken away — won’t be coming back, the minister’s public assurances notwithstanding.

10 years: Graphene Flagship Project and Human Brain Project

“Graphene and Human Brain Project win biggest research award in history (& this is the 2000th post)” on January 28, 2013 was how I announced the results of what had been a European Union (EU) competition that stretched out over several years and many stages as projects were evaluated and fell by the wayside or were allowed onto the next stage. The two finalists received €1B each, to be paid out over ten years.

Future or not

As you can see, there was plenty of interesting stuff going on in 2023 but no watershed moments in the areas I follow. (Please do let me know in the Comments should you disagree with this or any other part of this posting.) Nanotechnology seems less and less an emerging science/technology in itself and more like a foundational element of our science and technology sectors. On that note, you may find my upcoming (in 2024) post about a report concerning the economic impact of the US National Nanotechnology Initiative (NNI) from 2002 to 2022 of interest.

Following on the commercialization theme, I have noticed an increase of interest in commercializing brain and brainlike engineering technologies, as well as more discussion about ethics.

Colonizing the brain?

UNESCO held events such as, this noted in my July 17, 2023 posting, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends—a UNESCO report” and this noted in my July 7, 2023 posting “Global dialogue on the ethics of neurotechnology on July 13, 2023 led by UNESCO.” An August 21, 2023 posting, “Ethical nanobiotechnology” adds to the discussion.

Meanwhile, Australia has been producing some very interesting mind/robot research, my June 13, 2023 posting, “Mind-controlled robots based on graphene: an Australian research story.” I have more of this kind of research (mind control or mind reading) from Australia to be published in early 2024. The Australians are not alone; there’s also this April 12, 2023 posting, “Mind-reading prosthetic limbs” from Germany.

My May 12, 2023 posting, “Virtual panel discussion: Canadian Strategies for Responsible Neurotechnology Innovation on May 16, 2023” shows Canada is entering the discussion. Unfortunately, the Canadian Science Policy Centre (CSPC), which held the event, has not posted a video online even though it has a YouTube channel featuring others of its events.

As for neuromorphic engineering, China has produced a roadmap for its research in this area as noted in my March 20, 2023 posting, “A nontraditional artificial synaptic device and roadmap for Chinese research into neuromorphic devices.”

Quantum anybody?

I haven’t singled it out in this end-of-year posting but there is a great deal of interest in quantum computing, both here in Canada and elsewhere. There is a 2023 report from the Council of Canadian Academies on the topic of quantum computing in Canada, which I hope to comment on soon.

Final words

I have a shout out for the Canadian Science Policy Centre, which celebrated its 15th anniversary in 2023. Congratulations!

For everyone, I wish peace on earth and all the best for you and yours in 2024!

Shape-changing speaker (aka acoustic swarms) for sound control

To alleviate any concerns, these swarms are not kin to Michael Crichton’s swarms in his 2002 novel, Prey or his 2011 novel, Micro (published after his death).

A September 21, 2023 news item on ScienceDaily announces this ‘acoustic swarm’ research,

In virtual meetings, it’s easy to keep people from talking over each other. Someone just hits mute. But for the most part, this ability doesn’t translate easily to recording in-person gatherings. In a bustling cafe, there are no buttons to silence the table beside you.

The ability to locate and control sound — isolating one person talking from a specific location in a crowded room, for instance — has challenged researchers, especially without visual cues from cameras.

A team led by researchers at the University of Washington has developed a shape-changing smart speaker, which uses self-deploying microphones to divide rooms into speech zones and track the positions of individual speakers. With the help of the team’s deep-learning algorithms, the system lets users mute certain areas or separate simultaneous conversations, even if two adjacent people have similar voices. Like a fleet of Roombas, each about an inch in diameter, the microphones automatically deploy from, and then return to, a charging station. This allows the system to be moved between environments and set up automatically. In a conference room meeting, for instance, such a system might be deployed instead of a central microphone, allowing better control of in-room audio.

The team published its findings Sept. 21 [2023] in Nature Communications.

A September 21, 2023 University of Washington (state) news release (also on EurekAlert), which originated the news item, delves further into the work, Note: Links have been removed,

“If I close my eyes and there are 10 people talking in a room, I have no idea who’s saying what and where they are in the room exactly. That’s extremely hard for the human brain to process. Until now, it’s also been difficult for technology,” said co-lead author Malek Itani, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “For the first time, using what we’re calling a robotic ‘acoustic swarm,’ we’re able to track the positions of multiple people talking in a room and separate their speech.”

Previous research on robot swarms has required using overhead or on-device cameras, projectors or special surfaces. The UW team’s system is the first to accurately distribute a robot swarm using only sound.

The team’s prototype consists of seven small robots that spread themselves across tables of various sizes. As they move from their charger, each robot emits a high frequency sound, like a bat navigating, using this frequency and other sensors to avoid obstacles and move around without falling off the table. The automatic deployment allows the robots to place themselves for maximum accuracy, permitting greater sound control than if a person set them. The robots disperse as far from each other as possible since greater distances make differentiating and locating people speaking easier. Today’s consumer smart speakers have multiple microphones, but clustered on the same device, they’re too close to allow for this system’s mute and active zones.

“If I have one microphone a foot away from me, and another microphone two feet away, my voice will arrive at the microphone that’s a foot away first. If someone else is closer to the microphone that’s two feet away, their voice will arrive there first,” said co-lead author Tuochao Chen, a UW doctoral student in the Allen School. “We developed neural networks that use these time-delayed signals to separate what each person is saying and track their positions in a space. So you can have four people having two conversations and isolate any of the four voices and locate each of the voices in a room.”
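Chen’s two-microphone example is the classic time-difference-of-arrival cue. As a rough illustration of that one ingredient (my own sketch, not the team’s code; their system layers neural networks over many such cues from the seven robots), the delay between two microphones can be estimated from the peak of their cross-correlation:

```python
import numpy as np

def estimate_delay(mic_a, mic_b, sample_rate):
    """Estimate how much later (in seconds) a sound arrives at mic_b
    than at mic_a, using the peak of the cross-correlation."""
    corr = np.correlate(mic_a, mic_b, mode="full")
    lag = np.argmax(corr) - (len(mic_b) - 1)  # negative lag: sound hit mic_a first
    return -lag / sample_rate

# Toy example: a noise burst reaching mic_b 25 samples after mic_a.
rng = np.random.default_rng(0)
burst = rng.standard_normal(200)
mic_a = np.concatenate([burst, np.zeros(125)])
mic_b = np.concatenate([np.zeros(25), burst, np.zeros(100)])

delay = estimate_delay(mic_a, mic_b, sample_rate=48_000)
```

At 48 kHz and the speed of sound (~343 m/s), a 25-sample delay corresponds to roughly 18 cm of extra travel distance, which hints at why the robots spread as far apart as possible: larger separations produce larger, easier-to-distinguish delays.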

The team tested the robots in offices, living rooms and kitchens with groups of three to five people speaking. Across all these environments, the system could discern different voices within 1.6 feet (50 centimeters) of each other 90% of the time, without prior information about the number of speakers. The system was able to process three seconds of audio in 1.82 seconds on average — fast enough for live streaming, though a bit too long for real-time communications such as video calls.
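For context, the quoted throughput works out to a real-time factor below 1, which is why live streaming is feasible even though the roughly 1.8-second chunk latency is too high for a comfortable video call:

```python
chunk_seconds = 3.0        # audio processed per batch, per the paper
processing_seconds = 1.82  # average time to process one batch
real_time_factor = processing_seconds / chunk_seconds
# 1.82 / 3.0 ≈ 0.61: the system keeps up with incoming audio,
# but each chunk still arrives ~1.8 s after it was spoken.
```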

As the technology progresses, researchers say, acoustic swarms might be deployed in smart homes to better differentiate people talking with smart speakers. That could potentially allow only people sitting on a couch, in an “active zone,” to vocally control a TV, for example.

Researchers plan to eventually make microphone robots that can move around rooms, instead of being limited to tables. The team is also investigating whether the speakers can emit sounds that allow for real-world mute and active zones, so people in different parts of a room can hear different audio. The current study is another step toward science fiction technologies, such as the “cone of silence” in “Get Smart” and “Dune,” the authors write.

Of course, any technology that evokes comparison to fictional spy tools will raise questions of privacy. Researchers acknowledge the potential for misuse, so they have included guards against this: The microphones navigate with sound, not an onboard camera like other similar systems. The robots are easily visible and their lights blink when they’re active. Instead of processing the audio in the cloud, as most smart speakers do, the acoustic swarms process all the audio locally, as a privacy constraint. And even though some people’s first thoughts may be about surveillance, the system can be used for the opposite, the team says.

“It has the potential to actually benefit privacy, beyond what current smart speakers allow,” Itani said. “I can say, ‘Don’t record anything around my desk,’ and our system will create a bubble 3 feet around me. Nothing in this bubble would be recorded. Or if two groups are speaking beside each other and one group is having a private conversation, while the other group is recording, one conversation can be in a mute zone, and it will remain private.”

Takuya Yoshioka, a principal research manager at Microsoft, is a co-author on this paper, and Shyam Gollakota, a professor in the Allen School, is a senior author. The research was funded by a Moore Inventor Fellow award.

Two of the paper’s authors, Malek Itani and Tuochao Chen, have written a ‘Behind the Paper’ article for Nature.com’s Electrical and Electronic Engineering Community, from their September 21, 2023 posting,

Sound is a versatile medium. In addition to being one of the primary means of communication for us humans, it serves numerous purposes for organisms across the animal kingdom. Particularly, many animals use sound to localize themselves and navigate in their environment. Bats, for example, emit ultrasonic sound pulses to move around and find food in the dark. Similar behavior can be observed in Beluga whales to avoid obstacles and locate one another.

Various animals also have a tendency to cluster together into swarms, forming a unit greater than the sum of its parts. Famously, bees agglomerate into swarms to more efficiently search for a new colony. Birds flock to evade predators. These behaviors have caught the attention of scientists for quite some time, inspiring a handful of models for crowd control, optimization and even robotics. 

A key challenge in building robot swarms for practical purposes is the ability for the robots to localize themselves, not just within the swarm, but also relative to other important landmarks. …

Here’s a link to and a citation for the paper,

Creating speech zones with self-distributing acoustic swarms by Malek Itani, Tuochao Chen, Takuya Yoshioka & Shyamnath Gollakota. Nature Communications volume 14, Article number: 5684 (2023) DOI: https://doi.org/10.1038/s41467-023-40869-8 Published: 21 September 2023

This paper is open access.

Robot that can maneuver through living lung tissue

Caption: Overview of the semiautonomous medical robot’s three stages in the lungs. Credit: Kuntz et al.

This looks like one robot operating on another robot; I guess the researchers want to emphasize the fact that this autonomous surgical procedure isn’t currently being tested on human beings.

There’s more in a September 21, 2023 news item on ScienceDaily,

Scientists have shown that their steerable lung robot can autonomously maneuver the intricacies of the lung, while avoiding important lung structures.

Lung cancer is the leading cause of cancer-related deaths in the United States. Some tumors are extremely small and hide deep within lung tissue, making it difficult for surgeons to reach them. To address this challenge, UNC-Chapel Hill and Vanderbilt University researchers have been working on an extremely bendy but sturdy robot capable of traversing lung tissue.

Their research has reached a new milestone. In a new paper, published in Science Robotics, Ron Alterovitz, PhD, in the UNC Department of Computer Science, and Jason Akulian, MD MPH, in the UNC Department of Medicine, have proven that their robot can autonomously go from “Point A” to “Point B” while avoiding important structures, such as tiny airways and blood vessels, in a living laboratory model.

Thankfully there’s a September 21, 2023 University of North Carolina (UNC) news release (also on EurekAlert), which originated the news item, to provide more information, Note: Links have been removed,

“This technology allows us to reach targets we can’t otherwise reach with a standard or even robotic bronchoscope,” said Dr. Akulian, co-author on the paper and Section Chief of Interventional Pulmonology and Pulmonary Oncology in the UNC Division of Pulmonary Disease and Critical Care Medicine. “It gives you that extra few centimeters or few millimeters even, which would help immensely with pursuing small targets in the lungs.”

The development of the autonomous steerable needle robot leveraged UNC’s highly collaborative culture by blending medicine, computer science, and engineering expertise. In addition to Alterovitz and Akulian, the development effort included Yueh Z. Lee, MD, PhD, at the UNC Department of Radiology, as well as Robert J. Webster III at Vanderbilt University and Alan Kuntz at the University of Utah.

The robot is made of several separate components. A mechanical control provides controlled thrust of the needle to go forward and backward and the needle design allows for steering along curved paths. The needle is made from a nickel-titanium alloy and has been laser etched to increase its flexibility, allowing it to move effortlessly through tissue.

As it moves forward, the etching on the needle allows it to steer around obstacles with ease. Other attachments, such as catheters, could be used together with the needle to perform procedures such as lung biopsies.

To drive through tissue, the needle needs to know where it is going. The research team used CT scans of the subject’s thoracic cavity and artificial intelligence to create three-dimensional models of the lung, including the airways, blood vessels, and the chosen target. Using this 3-D model and once the needle has been positioned for launch, their AI-driven software instructs it to automatically travel from “Point A” to “Point B” while avoiding important structures.
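The “Point A to Point B” planning described here is, at its core, a shortest-path search through a 3-D model with vessels and airways marked as obstacles. A deliberately simplified sketch (a breadth-first search over an occupancy grid, my own illustration; the real planner must also respect the needle’s curved-path kinematics and respiratory motion) might look like:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest obstacle-free path on a
    3-D occupancy grid (True = obstacle, e.g. a vessel or airway)."""
    dims = (len(grid), len(grid[0]), len(grid[0][0]))
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    parent = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:  # reconstruct the path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dx, dy, dz in moves:
            nxt = (cell[0] + dx, cell[1] + dy, cell[2] + dz)
            if (all(0 <= nxt[i] < dims[i] for i in range(3))
                    and nxt not in parent
                    and not grid[nxt[0]][nxt[1]][nxt[2]]):
                parent[nxt] = cell
                queue.append(nxt)
    return None  # no obstacle-free route exists

# 4x4x4 volume with a line of "vessel" voxels blocking the direct route.
grid = [[[False] * 4 for _ in range(4)] for _ in range(4)]
for y in range(4):
    grid[1][y][1] = True
path = plan_path(grid, (0, 0, 0), (3, 3, 3))
```

The AI-driven software the release mentions replans against a far richer model, but the underlying task is the same: find a route to the target that never intersects a labeled structure.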

“The autonomous steerable needle we’ve developed is highly compact, but the system is packed with a suite of technologies that allow the needle to navigate autonomously in real-time,” said Alterovitz, the principal investigator on the project and senior author on the paper. “It’s akin to a self-driving car, but it navigates through lung tissue, avoiding obstacles like significant blood vessels as it travels to its destination.”

The needle can also account for respiratory motion. Unlike other organs, the lungs are constantly expanding and contracting in the chest cavity. This can make targeting especially difficult in a living, breathing subject. According to Akulian, it’s like shooting at a moving target.

The researchers tested their robot while the laboratory model performed intermittent breath holding. Every time the subject’s breath is held, the robot is programmed to move forward.

“There remain some nuances in terms of the robot’s ability to acquire targets and then actually get to them effectively,” said Akulian, who is also a member of the UNC Lineberger Comprehensive Cancer Center, “and while there’s still a lot of work to be done, I’m very excited about continuing to push the boundaries of what we can do for patients with the world-class experts that are here.”

“We plan to continue creating new autonomous medical robots that combine the strengths of robotics and AI to improve medical outcomes for patients facing a variety of health challenges while providing guarantees on patient safety,” added Alterovitz.

Here’s a link to and a citation for the paper,

Autonomous medical needle steering in vivo by Alan Kuntz, Maxwell Emerson, Tayfun Efe Ertop, Inbar Fried, Mengyu Fu, Janine Hoelscher, Margaret Rox, Jason Akulian, Erin A. Gillaspie, Yueh Z. Lee, Fabien Maldonado, Robert J. Webster III, and Ron Alterovitz. Science Robotics 20 Sep 2023 Vol 8, Issue 82 DOI: 10.1126/scirobotics.adf7614

This paper is behind a paywall.

An artificial, multisensory integrated neuron makes AI (artificial intelligence) smarter

More brainlike (neuromorphic) computing but this time, it’s all about the senses. From a September 15, 2023 news item on ScienceDaily, Note: A link has been removed,

The feel of a cat’s fur can reveal some information, but seeing the feline provides critical details: is it a housecat or a lion? While the sound of fire crackling may be ambiguous, its scent confirms the burning wood. Our senses synergize to give a comprehensive understanding, particularly when individual signals are subtle. The collective sum of biological inputs can be greater than their individual contributions. Robots tend to follow more straightforward addition, but researchers have now harnessed the biological concept for application in artificial intelligence (AI) to develop the first artificial, multisensory integrated neuron.

Led by Saptarshi Das, associate professor of engineering science and mechanics at Penn State, the team published their work today (Sept. 15 [2023]) in Nature Communications.

A September 12, 2023 Pennsylvania State University (Penn State) news release (also on EurekAlert but published September 15, 2023) by Ashley WennersHerron, which originated the news item, provides more detail about the research,

“Robots make decisions based on the environment they are in, but their sensors do not generally talk to each other,” said Das, who also has joint appointments in electrical engineering and in materials science and engineering. “A collective decision can be made through a sensor processing unit, but is that the most efficient or effective method? In the human brain, one sense can influence another and allow the person to better judge a situation.”

For instance, a car might have one sensor scanning for obstacles, while another senses darkness to modulate the intensity of the headlights. Individually, these sensors relay information to a central unit which then instructs the car to brake or adjust the headlights. According to Das, this process consumes more energy. Allowing sensors to communicate directly with each other can be more efficient in terms of energy and speed — particularly when the inputs from both are faint.

“Biology enables small organisms to thrive in environments with limited resources, minimizing energy consumption in the process,” said Das, who is also affiliated with the Materials Research Institute. “The requirements for different sensors are based on the context — in a dark forest, you’d rely more on listening than seeing, but we don’t make decisions based on just one sense. We have a complete sense of our surroundings, and our decision making is based on the integration of what we’re seeing, hearing, touching, smelling, etcetera. The senses evolved together in biology, but separately in AI. In this work, we’re looking to combine sensors and mimic how our brains actually work.”

The team focused on integrating a tactile sensor and a visual sensor so that the output of one sensor modifies the other, with the help of visual memory. According to Muhtasim Ul Karim Sadaf, a third-year doctoral student in engineering science and mechanics, even a short-lived flash of light can significantly enhance the chance of successful movement through a dark room.

“This is because visual memory can subsequently influence and aid the tactile responses for navigation,” Sadaf said. “This would not be possible if our visual and tactile cortex were to respond to their respective unimodal cues alone. We have a photo memory effect, where light shines and we can remember. We incorporated that ability into a device through a transistor that provides the same response.”

The researchers fabricated the multisensory neuron by connecting a tactile sensor to a phototransistor based on a monolayer of molybdenum disulfide, a compound that exhibits unique electrical and optical characteristics useful for detecting light and supporting transistors. The sensor generates electrical spikes in a manner reminiscent of neurons processing information, allowing it to integrate both visual and tactile cues.

It’s the equivalent of seeing an “on” light on the stove and feeling heat coming off of a burner — seeing the light on doesn’t necessarily mean the burner is hot yet, but a hand only needs to feel a nanosecond of heat before the body reacts and pulls the hand away from the potential danger. The input of light and heat triggered signals that induced the hand’s response. In this case, the researchers measured the artificial neuron’s version of this by observing the signaling outputs that resulted from visual and tactile input cues.

To simulate touch input, the tactile sensor used the triboelectric effect, in which two layers slide against one another to produce electricity, meaning the touch stimuli were encoded into electrical impulses. To simulate visual input, the researchers shined a light into the monolayer molybdenum disulfide photo memtransistor — or a transistor that can remember visual input, like how a person can hold onto the general layout of a room after a quick flash illuminates it.

They found that the sensory response of the neuron — simulated as electrical output — increased when both visual and tactile signals were weak.

“Interestingly, this effect resonates remarkably well with its biological counterpart — a visual memory naturally enhances the sensitivity to tactile stimulus,” said co-first author Najam U Sakib, a third-year doctoral student in engineering science and mechanics. “When cues are weak, you need to combine them to better understand the information, and that’s what we saw in the results.”
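The super-additive summation Sakib describes can be mimicked with a toy threshold model (my own illustration, not the device physics): each weak cue on its own sits below a sigmoidal threshold, but the two together cross it, so the combined response far exceeds the sum of the individual responses:

```python
import math

def neuron_response(visual, tactile):
    """Toy sensory neuron with a sigmoidal threshold at a total
    drive of 1.0; output stays near zero until the threshold is crossed."""
    drive = visual + tactile
    return 1.0 / (1.0 + math.exp(-(drive - 1.0) / 0.1))

weak = 0.5
separate = neuron_response(weak, 0.0) + neuron_response(0.0, weak)
together = neuron_response(weak, weak)
# Each weak cue alone barely registers (~0.007 each), but combined
# they clear the threshold (response 0.5): super-additive integration.
```

With strong inputs, both single and combined cues saturate the sigmoid, so the super-additive advantage shrinks, matching the paper’s observation that the effect is largest when both signals are weak.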

Das explained that an artificial multisensory neuron system could enhance sensor technology’s efficiency, paving the way for more eco-friendly AI uses. As a result, robots, drones and self-driving vehicles could navigate their environment more effectively while using less energy.

“The super additive summation of weak visual and tactile cues is the key accomplishment of our research,” said co-author Andrew Pannone, a fourth-year doctoral student in engineering science and mechanics. “For this work, we only looked into two senses. We’re working to identify the proper scenario to incorporate more senses and see what benefits they may offer.”

Harikrishnan Ravichandran, a fourth-year doctoral student in engineering science and mechanics at Penn State, also co-authored this paper.

The Army Research Office and the National Science Foundation supported this work.

Here’s a link to and a citation for the paper,

A bio-inspired visuotactile neuron for multisensory integration by Muhtasim Ul Karim Sadaf, Najam U Sakib, Andrew Pannone, Harikrishnan Ravichandran & Saptarshi Das. Nature Communications volume 14, Article number: 5729 (2023) DOI: https://doi.org/10.1038/s41467-023-40686-z Published: 15 September 2023

This paper is open access.

Purifying DNA origami nanostructures with a LEGO robot

This July 20, 2023 article by Bob Yirka for phys.org highlights some frugal science, Note: A link has been removed,

A team of bioengineers at Arizona State University has found a way to use a LEGO robot as a gradient mixer in one part of a process to create DNA origami nanostructures. In their paper published on the open-access site PLOS [Public Library of Science] ONE, the group describes how they made their mixer and its performance.

To create DNA origami structures, purification of DNA [deoxyribonucleic acid] origami nanostructures is required. This is typically done using rate-zone centrifugation, which involves the use of a relatively expensive piece of a machinery, a gradient mixer. In this new effort, the team at ASU has found that it is possible to build such a mixer using off-the-shelf LEGO kits.

I found a video provided by MindSpark Media describing the process on YouTube,

I’d love to know who paid for the video and why. This is pretty slick and it’s not from Arizona State University’s (ASU) media team.

It gets more interesting on the MindSpark Media About webpage,

MindSpark Media is an independent media unit focusing on all major Media & Marketing services that includes Media Buying and Selling activities, bringing out special features on various supplements/country reports and international features on topics of interest in association with various leading English & Arabic vernaculars in the UAE [United Arab Emirates] and across MENA [Middle East and North Africa].

MindSpark Media is a complete media-selling experience that offers its clientele a wholesome exposure to the best media brands in the country. We also offer an opportunity to meet up and interact with the top brass of the industry & corporates for their advertorial packages including one-to-one interviews with photo-shoot sessions etc.

MindSpark Media delivers client-tailored advertorials that includes their product advertisements, features and interviews published in the form of special reports, supplements & special features, which are released and distributed with top-notch publications in the UAE.

We also focus on advertising activities in the media-buying sector such as Print, Outdoor, TV, Radio and Corporate Video, e-commerce & web-designing for clients in the UAE, MENA and beyond.

Perhaps the researchers are hoping to commercialize the work in some fashion? I couldn’t find any mention of a startup or other commercial entity but it’s a common practice these days in the US and, increasingly, many other countries.

Getting back to the research, here’s a link to and a citation for the paper,

Gradient-mixing LEGO robots for purifying DNA origami nanostructures of multiple components by rate-zonal centrifugation by Jason Sentosa, Franky Djutanta, Brian Horne, Dominic Showkeir, Robert Rezvani, Chloe Leff, Swechchha Pradhan, Rizal F. Hariadi. PLOS ONE (2023). DOI: 10.1371/journal.pone.0283134 Published: July 19, 2023

This paper is open access.

Big Conversation Season (podcast) Finale on ‘AI and the Future of Humanity’ available on Friday, September 22, 2023

Three guys (all Brits) talking about the question “Robot Race: Could AI Ever Replace Humanity?” (part 1) make up one installment of a larger video podcast series known as the ‘Big Conversation’; part 2 is going to be available on Friday, September 22, 2023.

I haven’t listened to the entire first part of the conversation yet. So far, it seems quite engaging and provocative (especially the first five minutes). They’re not arguing but, since I don’t want to spoil the surprise, do watch the first bit (the first 5 mins. of a 53 mins. 38 secs. podcast).

You can’t ask more of a conversation than to be provoked into thinking. That said …

Pause

I’m a little hesitant to include much about faith and religion here but this two-part series touches on topics that have been discussed here many times. So, the ‘Big Conversation’ is produced through a Christian group. Here’s more about the podcast series and its producers from the Big Conversation webpage,

The Big Conversation is a video series from Premier Unbelievable? featuring world-class thinkers across the religious and non-religious communities. Exploring science, faith, philosophy and what it means to be human [emphasis mine]. The Big Conversation is produced by Premier in partnership with John Templeton Foundation.

Premier consists of Premier Christian Media Trust registered as a charity (no. 287610) and as a company limited by guarantee (no. 01743091) with two fully-owned trading subsidiaries: Premier Christian Communications Ltd (no. 02816074) and Christian Communication Partnership Ltd (no. 03422292). All three companies are registered in England & Wales with a registered office address of Unit 6 April Court, Syborn Way, Crowborough, TN6 3DZ.

I haven’t seen any signs of proselytizing and like almost every other website in existence, they are very interested in getting you to be on their newsletter email list, to donate, etc.

Back to the conversation.

The Robot Race, Parts 1 & 2: Could AI ever replace humanity?

Here’s a description of the Big Conversation series and two specific podcasts, from the September 20, 2023 press release (received via email),

Big Conversation Season Finale on AI and the Future of Humanity Available this Friday

Featuring AI expert Dr. Nigel Crook, episode explores ‘The Robot Race: Could AI ever replace humans?’

WHAT: 
Currently in its 5th season, The Big Conversation, hosted by comedian and apologist Andy Kind, features some of the biggest minds in the Christian, atheist and religious world to debate some of the biggest questions of science, faith, philosophy and what it means to be human. 

Episodes 5 & 6 of this season feature a two-part discussion about robotics, the future of artificial intelligence and the subsequent concerns of morality surrounding these advancements. This thought-provoking exchange on ethics in AI is sure to leave listeners informed and intrigued to learn more regarding the future of humanity relating to cyber-dependency, automation regulation, AI agency and abuses of power in technology.

WHO:  
To help us understand the complexities of AI, including the power and ethics around the subject – and appropriate concern for the future of humanity – The Big Conversation host Andy Kind spoke with AI Expert Dr. Nigel Crook and Neuroscientist Anil Seth.   

Dr. Nigel Crook, a distinguished figure recognized for his innovative contributions to the realm of AI and robotics, focuses extensively on research related to machine learning inspired by biological processes and the domain of social robotics. He serves as Professor of Artificial Intelligence and Robotics at Oxford Brookes University and is the Founding Director of the Institute for Ethical AI, whose work centres on the concept of self-governing ethical robots.

WHEN:  
Episode 5, the first in the two-part AI series, released September 8 [2023], and episode 6 releases Friday, Sept. 22 [2023].  

WHERE:  
These episodes are available at https://www.thebigconversation.show/ as well as all major podcast platforms.  

I have a little more about Anil Seth from the Big Conversation Episode 5 webpage,

… Anil Seth, Professor of Cognitive & Computational Neuroscience at the University of Sussex, winner of The Michael Faraday Prize and Lecture 2023, and author of “Being You: A New Science of Consciousness”

There’s also a bit about Seth in my June 30, 2017 posting “A question of consciousness: Facebotlish (a new language); a July 5, 2017 rap guide performance in Vancouver, Canada; Tom Stoppard’s play; and a little more,” scroll down to the subhead titled ‘Vancouver premiere of Baba Brinkman’s Rap Guide to Consciousness’.

Single chip mimics human vision and memory abilities

A June 15, 2023 RMIT University (Australia) press release (also on EurekAlert but published June 14, 2023) announces a neuromorphic (brainlike) computer chip, which mimics human vision and ‘creates’ memories,

Researchers have created a small device that ‘sees’ and creates memories in a similar way to humans, in a promising step towards one day having applications that can make rapid, complex decisions such as in self-driving cars.

The neuromorphic invention is a single chip enabled by a sensing element, doped indium oxide, that’s thousands of times thinner than a human hair and requires no external parts to operate.

RMIT University engineers in Australia led the work, with contributions from researchers at Deakin University and the University of Melbourne.

The team’s research demonstrates a working device that captures, processes and stores visual information. With precise engineering of the doped indium oxide, the device mimics a human eye’s ability to capture light, pre-packages and transmits information like an optical nerve, and stores and classifies it in a memory system like the way our brains can.

Collectively, these functions could enable ultra-fast decision making, the team says.

Team leader Professor Sumeet Walia said the new device can perform all necessary functions – sensing, creating and processing information, and retaining memories – rather than relying on external energy-intensive computation, which prevents real-time decision making.

“Performing all of these functions on one small device had proven to be a big challenge until now,” said Walia from RMIT’s School of Engineering.

“We’ve made real-time decision making a possibility with our invention, because it doesn’t need to process large amounts of irrelevant data and it’s not being slowed down by data transfer to separate processors.”

What did the team achieve and how does the technology work?

The new device was able to demonstrate an ability to retain information for longer periods of time, compared to previously reported devices, without the need for frequent electrical signals to refresh the memory. This ability significantly reduces energy consumption and enhances the device’s performance.

Their findings and analysis are published in Advanced Functional Materials.

First author and RMIT PhD researcher Aishani Mazumder said the human brain used analog processing, which allowed it to process information quickly and efficiently using minimal energy.

“By contrast, digital processing is energy and carbon intensive, and inhibits rapid information gathering and processing,” she said.

“Neuromorphic vision systems are designed to use similar analog processing to the human brain, which can greatly reduce the amount of energy needed to perform complex visual tasks compared with today’s technologies.”

What are the potential applications?

The team used ultraviolet light as part of their experiments, and are working to expand this technology even further for visible and infrared light – with many possible applications such as bionic vision, autonomous operations in dangerous environments, shelf-life assessments of food and advanced forensics.

“Imagine a self-driving car that can see and recognise objects on the road in the same way that a human driver can, or being able to rapidly detect and track space junk. This would be possible with neuromorphic vision technology.”

Walia said neuromorphic systems could adapt to new situations over time, becoming more efficient with more experience.

“Traditional computer vision systems – which cannot be miniaturised like neuromorphic technology – are typically programmed with specific rules and can’t adapt as easily,” he said.

“Neuromorphic robots have the potential to run autonomously for long periods, in dangerous situations where workers are exposed to possible cave-ins, explosions and toxic air.”

The human eye has a single retina that captures an entire image, which is then processed by the brain to identify objects, colours and other visual features.

The team’s device mimicked the retina’s capabilities by using single-element image sensors that capture, store and process visual information on one platform, Walia said.

“The human eye is exceptionally adept at responding to changes in the surrounding environment in a faster and much more efficient way than cameras and computers currently can,” he said.

“Taking inspiration from the eye, we have been working for several years on creating a camera that possesses similar abilities, through the process of neuromorphic engineering.” 
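The analog, event-driven processing the release describes can be sketched with a toy leaky-integrator pixel model. This is purely an illustrative assumption on my part: the function, parameters, and numbers below are invented for the sketch and do not model the RMIT doped indium oxide device itself.

```python
# Toy sketch of in-sensor neuromorphic processing: each pixel integrates
# incoming light (leaky integrator), emits an "event" when it crosses a
# threshold, and otherwise retains a decaying trace as short-term memory.
# All names and parameters here are illustrative, not the actual device's.

def run_pixel(light_frames, leak=0.9, threshold=1.0):
    """Return (event_times, final_trace) for one pixel over a light sequence."""
    trace = 0.0
    events = []
    for t, intensity in enumerate(light_frames):
        trace = leak * trace + intensity  # integrate with leak (analog-style)
        if trace >= threshold:
            events.append(t)  # spike: only now is information passed on
            trace = 0.0       # reset after firing
    return events, trace

# A bright flash fires immediately; dim light must accumulate over frames.
bright_events, _ = run_pixel([1.2, 0.0, 0.0])
dim_events, _ = run_pixel([0.4, 0.4, 0.4])
print(bright_events, dim_events)  # → [0] [2]
```

The point of the sketch is the efficiency argument in the quote: nothing is transmitted or computed downstream until a pixel actually has something to report, unlike a conventional camera that ships every frame to a separate processor.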

Here’s a link to and a citation for the paper,

Long Duration Persistent Photocurrent in 3 nm Thin Doped Indium Oxide for Integrated Light Sensing and In-Sensor Neuromorphic Computation by Aishani Mazumder, Chung Kim Nguyen, Thiha Aung, Mei Xian Low, Md. Ataur Rahman, Salvy P. Russo, Sherif Abdulkader Tawfik, Shifan Wang, James Bullock, Vaishnavi Krishnamurthi. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.202303641 First published: 14 June 2023

This paper is open access.

Dealing with mosquitos: a robot story and an engineered human tissue story

I have two ‘mosquito and disease’ stories, the first concerning dengue fever and the second, malaria.

Dengue fever in Taiwan

A June 8, 2023 news item on phys.org features robotic vehicles, dengue fever, and mosquitoes,

Unmanned ground vehicles can be used to identify and eliminate the breeding sources of mosquitos that carry dengue fever in urban areas, according to a new study published in PLOS Neglected Tropical Diseases by Wei-Liang Liu of the Taiwan National Mosquito-Borne Diseases Control Research Center, and colleagues.

It turns out sewers are a problem. This June 8, 2023 PLOS (Public Library of Science) news release on EurekAlert provides more context and detail,

Dengue fever is an infectious disease caused by the dengue virus and spread by several mosquito species in the genus Aedes, which also spread chikungunya, yellow fever and zika. Through the process of urbanization, sewers have become easy breeding grounds for Aedes mosquitos and most current mosquito monitoring programs struggle to monitor and analyze the density of mosquitos in these hidden areas.

In the new control effort, researchers combined a crawling robot, wire-controlled cable car and real-time monitoring system into an unmanned ground vehicle system (UGV) that can take high-resolution, real-time images of areas within sewers. From May to August 2018, the system was deployed in five administrative districts in Kaohsiung city, Taiwan, with covered roadside sewer ditches suspected to be hotspots for mosquitos. Mosquito gravitraps were placed above the sewers to monitor effects of the UGV intervention on adult mosquitos in the area.

In 20.7% of inspected sewers, the system found traces of Aedes mosquitos in stages from larvae to adult. In positive sewers, additional prevention control measures were carried out, using either insecticides or high-temperature water jets. Immediately after these interventions, the gravitrap index (GI), a measure of the adult mosquito density nearby, dropped significantly from 0.62 to 0.19.

“The widespread use of UGVs can potentially eliminate some of the breeding sources of vector mosquitoes, thereby reducing the annual prevalence of dengue fever in Kaohsiung city,” the authors say.
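For scale, the reported gravitrap index drop works out to roughly a 69% reduction in nearby adult mosquito density. A quick back-of-the-envelope check, using only the figures quoted above:

```python
# Relative reduction implied by the reported gravitrap index (GI) figures.
gi_before, gi_after = 0.62, 0.19
reduction = (gi_before - gi_after) / gi_before
print(f"{reduction:.0%}")  # → 69%
```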

Here’s a link to and a citation for the paper,

Use of unmanned ground vehicle systems in urbanized zones: A study of vector Mosquito surveillance in Kaohsiung by Yu-Xuan Chen, Chao-Ying Pan, Bo-Yu Chen, Shu-Wen Jeng, Chun-Hong Chen, Joh-Jong Huang, Chaur-Dong Chen, Wei-Liang Liu. PLOS Neglected Tropical Diseases DOI: https://doi.org/10.1371/journal.pntd.0011346 Published: June 8, 2023

This paper is open access.

Dengue on the rise

Like many diseases, dengue is one where you may not have symptoms (asymptomatic), or they’re relatively mild and can be handled at home, or you may need care in a hospital and, in some cases, it can be fatal.

The World Health Organization (WHO) notes that dengue fever cases have increased exponentially since 2000 (from the March 17, 2023 version of the WHO’s “Dengue and severe dengue” fact sheet),

Global burden

The incidence of dengue has grown dramatically around the world in recent decades, with cases reported to WHO increasing from 505 430 in 2000 to 5.2 million in 2019. The vast majority of cases are asymptomatic or mild and self-managed, and hence the actual numbers of dengue cases are under-reported. Many cases are also misdiagnosed as other febrile illnesses (1).

One modelling estimate indicates 390 million dengue virus infections per year of which 96 million manifest clinically (2). Another study on the prevalence of dengue estimates that 3.9 billion people are at risk of infection with dengue viruses.

The disease is now endemic in more than 100 countries in the WHO Regions of Africa, the Americas, the Eastern Mediterranean, South-East Asia and the Western Pacific. The Americas, South-East Asia and Western Pacific regions are the most seriously affected, with Asia representing around 70% of the global disease burden.

Dengue is spreading to new areas including Europe, [emphasis mine] and explosive outbreaks are occurring. Local transmission was reported for the first time in France and Croatia in 2010 [emphasis mine] and imported cases were detected in 3 other European countries.

The largest number of dengue cases ever reported globally was in 2019. All regions were affected, and dengue transmission was recorded in Afghanistan for the first time. The American Region reported 3.1 million cases, with more than 25 000 classified as severe. High numbers of cases were reported in Asia: Bangladesh (101 000), Malaysia (131 000), the Philippines (420 000) and Vietnam (320 000).

Dengue continues to affect Brazil, Colombia, the Cook Islands, Fiji, India, Kenya, Paraguay, Peru, the Philippines, the Reunion Islands and Vietnam as of 2021. 

There’s information from an earlier version of the fact sheet, in my July 2, 2013 posting, highlighting different aspects of the disease, e.g., “About 2.5% of those affected die.”

A July 21, 2023 United Nations press release warns that the danger from mosquitoes spreading dengue fever could increase along with the temperature,

Global warming marked by higher average temperatures, precipitation and longer periods of drought, could prompt a record number of dengue infections worldwide, the World Health Organization (WHO) warned on Friday [July 21, 2023].

Despite the absence of mosquitoes infected with the dengue virus in Canada, the government has a Dengue fever information page. At this point, the concern is likely focused on travelers who’ve contracted the disease from elsewhere. However, I am guessing that researchers are keeping a close eye on Canadian mosquitoes as these situations can change.

Malaria in Florida (US)

The researchers from the University of Central Florida (UCF) couldn’t have known when they began their project to study mosquito bites and disease that Florida would register its first locally acquired malaria cases in 20 years this summer, from a July 26, 2023 article by Stephanie Colombini for NPR ([US] National Public Radio), Note: Links have been removed,

First local transmission in U.S. in 20 years

Heath [Hannah Heath] is one of eight known people in recent months who have contracted malaria in the U.S., after being bitten by a local mosquito, rather than while traveling abroad. The cases comprise the nation’s first locally transmitted outbreak in 20 years. The last time this occurred was in 2003, when eight people tested positive for malaria in Palm Beach, Fla.

One of the eight cases is in Texas; the rest occurred in the northern part of Sarasota County.

The Florida Department of Health recorded the most recent case in its weekly arbovirus report for July 9-15 [2023].

For the past month, health officials have issued a mosquito-borne illness alert for residents in Sarasota and neighboring Manatee County. Mosquito management teams are working to suppress the population of the type of mosquito that carries malaria, Anopheles.

Sarasota Memorial Hospital has treated five of the county’s seven malaria patients, according to Dr. Manuel Gordillo, director of infection control.

“The cases that are coming in are classic malaria, you know they come in with fever, body aches, headaches, nausea, vomiting, diarrhea,” Gordillo said, explaining that his hospital usually treats just one or two patients a year who acquire malaria while traveling abroad in Central or South America, or Africa.

All the locally acquired cases were of Plasmodium vivax malaria, a strain that typically produces milder symptoms or can even be asymptomatic, according to the Centers for Disease Control and Prevention. But the strain can still cause death, and pregnant people and children are particularly vulnerable.

Malaria does not spread from human-to-human contact; a mosquito carrying the disease has to bite someone to transmit the parasites.

Workers with Sarasota County Mosquito Management Services have been especially busy since May 26 [2023], when the first local case was confirmed.

Like similar departments across Florida, the team is experienced in responding to small outbreaks of mosquito-borne illnesses such as West Nile virus or dengue. They have protocols for addressing travel-related cases of malaria as well, but have ramped up their efforts now that they have confirmation that transmission is occurring locally between mosquitoes and humans.

While organizations like the World Health Organization have cautioned climate change could lead to more global cases and deaths from malaria and other mosquito-borne diseases, experts say it’s too soon to tell if the local transmission seen these past two months has any connection to extreme heat or flooding.

“We don’t have any reason to think that climate change has contributed to these particular cases,” said Ben Beard, deputy director of the CDC’s [US Centers for Disease Control and Prevention] division of vector-borne diseases and deputy incident manager for this year’s local malaria response.

“In a more general sense though, milder winters, earlier springs, warmer, longer summers – all of those things sort of translate into mosquitoes coming out earlier, getting their replication cycles sooner, going through those cycles faster and being out longer,” he said. “And so we are concerned about the impact of climate change and environmental change in general on what we call vector-borne diseases.”

Beard co-authored a 2019 report that highlights a significant increase in diseases spread by ticks and mosquitoes in recent decades. Lyme disease and West Nile virus were among the top five most prevalent.

“In the big picture it’s a very significant concern that we have,” he said.

Engineered tissue and bloodthirsty mosquitoes

A June 8, 2023 University of Central Florida (UCF) news release (also on EurekAlert) by Eric Eraso describes the research into engineered human tissue and features a ‘bloodthirsty’ video of the mosquitoes feeding. From the news release, Note: A link has been removed,

A UCF research team has engineered tissue with human cells that mosquitoes love to bite and feed upon — with the goal of helping fight deadly diseases transmitted by the biting insects.

A multidisciplinary team led by College of Medicine biomedical researcher Bradley Jay Willenberg with Mollie Jewett (UCF Burnett School of Biomedical Sciences) and Andrew Dickerson (University of Tennessee) lined 3D capillary gel biomaterials with human cells to create engineered tissue and then infused it with blood. Testing showed mosquitoes readily bite and blood feed on the constructs. Scientists hope to use this new platform to study how pathogens that mosquitoes carry impact and infect human cells and tissues. Presently, researchers rely largely upon animal models and cells cultured on flat dishes for such investigations.

Further, the new system holds great promise for blood feeding mosquito species that have proven difficult to rear and maintain as colonies in the laboratory, an important practical application. The Willenberg team’s work was published Friday in the journal Insects.

Mosquitos have often been called the world’s deadliest animal, as vector-borne illnesses, including those from mosquitos, cause more than 700,000 deaths worldwide each year. Malaria, dengue, Zika virus and West Nile virus are all transmitted by mosquitos. Even for those who survive these illnesses, many are left suffering from organ failure, seizures and serious neurological impacts.

“Many people get sick with mosquito-borne illnesses every year, including in the United States. The toll of such diseases can be especially devastating for many countries around the world,” Willenberg says.

This worldwide impact of mosquito-borne disease is what drives Willenberg, whose lab employs a unique blend of biomedical engineering, biomaterials, tissue engineering, nanotechnology and vector biology to develop innovative mosquito surveillance, control and research tools. He said he hopes to adapt his new platform for application to other vectors such as ticks, which spread Lyme disease.

“We have demonstrated the initial proof-of-concept with this prototype,” he says. “I think there are many potential ways to use this technology.”

Captured on video, Willenberg observed mosquitoes enthusiastically blood feeding from the engineered tissue, much as they would from a human host. This demonstration represents the achievement of a critical milestone for the technology: ensuring the tissue constructs were appetizing to the mosquitoes.

“As one of my mentors shared with me long ago, the goal of physicians and biomedical researchers is to help reduce human suffering,” he says. “So, if we can provide something that helps us learn about mosquitoes, intervene with diseases and, in some way, keep mosquitoes away from people, I think that is a positive.”

Willenberg came up with the engineered tissue idea when he learned the National Institutes of Health (NIH) was looking for new in vitro 3D models that could help study pathogens that mosquitoes and other biting arthropods carry.

“When I read about the NIH seeking these models, it got me thinking that maybe there is a way to get the mosquitoes to bite and blood feed [on the 3D models] directly,” he says. “Then I can bring in the mosquito to do the natural delivery and create a complete vector-host-pathogen interface model to study it all together.”

As this platform is still in its early stages, Willenberg wants to incorporate additional types of cells to move the system closer to human skin. He is also developing collaborations with experts who study pathogens and work with infected vectors, and is working with mosquito control organizations to see how they can use the technology.

“I have a particular vision for this platform, and am going after it. My experience too is that other good ideas and research directions will flourish when it gets into the hands of others,” he says. “At the end of the day, the collective ideas and efforts of the various research communities propel a system like ours to its full potential. So, if we can provide them tools to enable their work, while also moving ours forward at the same time, that is really exciting.”

Willenberg received his Ph.D. in biomedical engineering from the University of Florida and continued there for his postdoctoral training and then in scientist, adjunct scientist and lecturer positions. He joined the UCF College of Medicine in 2014, where he is currently an assistant professor of medicine.

Willenberg is also a co-founder, co-owner and manager of Saisijin Biotech, LLC and has a minor ownership stake in Sustained Release Technologies, Inc. Neither entity was involved in any way with the work presented in this story. Team members may also be listed as inventors on patent/patent applications that could result in royalty payments. This technology is available for licensing. To learn more, please visit ucf.flintbox.com/technologies/44c06966-2748-4c14-87d7-fc40cbb4f2c6.

Here’s a link to and a citation for the paper,

Engineered Human Tissue as A New Platform for Mosquito Bite-Site Biology Investigations by Corey E. Seavey, Mona Doshi, Andrew P. Panarello, Michael A. Felice, Andrew K. Dickerson, Mollie W. Jewett and Bradley J. Willenberg. Insects 2023, 14(6), 514; https://doi.org/10.3390/insects14060514 Published: 2 June 2023

This paper is open access.

That final paragraph in the news release is new to me. I’ve seen them list companies where the researchers have financial interests but this is the first time I’ve seen a news release that offers a statement attempting to cover all the bases including some future possibilities such as: “Team members may also be listed as inventors on patent/patent applications that could result in royalty payments.”

It seems pretty clear that there’s increasing concern about mosquito-borne diseases no matter where you live.