A July 30, 2020 news item on ScienceDaily describes more accurate health monitoring with electronics applied directly to your skin,
A team of researchers led by Cunjiang Yu, Bill D. Cook Associate Professor of Mechanical Engineering at the University of Houston, has developed a new form of electronics known as “drawn-on-skin electronics,” allowing multifunctional sensors and circuits to be drawn on the skin with an ink pen.
The advance, the researchers report in Nature Communications, allows for the collection of more precise, motion artifact-free health data, solving the long-standing problem of collecting precise biological data through a wearable device when the subject is in motion.
The imprecision may not be important when your Fitbit registers 4,000 steps instead of 4,200, but sensors designed to check heart function, temperature and other physical signals must be accurate if they are to be used for diagnostics and treatment.
The drawn-on-skin electronics are able to seamlessly collect data, regardless of the wearer’s movements.
They also offer other advantages, including simple fabrication techniques that don’t require dedicated equipment.
“It is applied like you would use a pen to write on a piece of paper,” said Yu. “We prepare several electronic materials and then use pens to dispense them. Coming out, it is liquid. But like ink on paper, it dries very quickly.”
Wearable bioelectronics – in the form of soft, flexible patches attached to the skin – have become an important way to monitor, prevent and treat illness and injury by tracking physiological information from the wearer. But even the most flexible wearables are limited by motion artifacts, or the difficulty that arises in collecting data when the sensor doesn’t move precisely with the skin.
The drawn-on-skin electronics can be customized to collect different types of information, and Yu said it is expected to be especially useful in situations where it’s not possible to access sophisticated equipment, including on a battleground.
The electronics are able to track muscle signals, heart rate, temperature and skin hydration, among other physical data, he said. The researchers also reported that the drawn-on-skin electronics have demonstrated the ability to accelerate healing of wounds.
In addition to Yu, researchers involved in the project include Faheem Ershad, Anish Thukral, Phillip Comeaux, Yuntao Lu, Hyunseok Shim, Kyoseung Sim, Nam-In Kim, Zhoulyu Rao, Ross Guevara, Luis Contreras, Fengjiao Pan, Yongcao Zhang, Ying-Shi Guan, Pinyi Yang, Xu Wang and Peng Wang, all from the University of Houston, and Jiping Yue and Xiaoyang Wu from the University of Chicago.
The drawn-on-skin electronics actually consist of three inks, serving as a conductor, semiconductor and dielectric.
“Electronic inks, including conductors, semiconductors, and dielectrics, are drawn on-demand in a freeform manner to develop devices, such as transistors, strain sensors, temperature sensors, heaters, skin hydration sensors, and electrophysiological sensors,” the researchers wrote.
This research is supported by the Office of Naval Research and National Institutes of Health.
Have you wondered about Zoom video conferencing and all that data being made available? Perhaps questioned ethical issues in addition to those associated with data security? If so, and you’d like to come up with a creative intervention that delves beyond encryption issues, there’s Zoom Obscura (on the creativeinformatics.org website),
CI [Creative Informatics] researchers Pip Thornton, Chris Elsden and Chris Speed were recently awarded funding from the Human Data Interaction Network (HDI+) Ethics & Data competition. Collaborating with researchers from Durham [Durham University] and KCL [King’s College London], the Zoom Obscura project aims to investigate creative interventions for a data ethics of video conferencing beyond encryption.
The COVID-19 pandemic has gifted video conferencing companies, such as Zoom, with a vast amount of economically valuable and sensitive data such as our facial and voice biometrics, backgrounds and chat scripts. Before the pandemic, this ‘new normal’ would be subject to scrutiny, scepticism and critique. Yet, the urgent need for remote working and socialising left us with little choice but to engage with these potentially exploitative platforms.
While much of the narrative around data security revolves around technological ‘solutions’ such as encryption, we think there are other – more creative – ways to push back against the systems of digital capitalism that continue to encroach on our everyday lives.
As part of this HDI-funded project, we seek artists, hackers and creative technologists who are interested in experimenting with creative methods to join us in a series of online workshops that will explore how to restore some control and agency in how we can be seen and heard in these newly ubiquitous online spaces. Through three half-day workshops held remotely, we will bring artists and technicians together to ideate, prototype, and exhibit various interventions into the rapidly normalising culture of video-calling in ways that do not compromise our privacy and limit the sharing of our data. We invite interventions that begin at any stage of the video-calling process – from analogue obfuscation, to software manipulation or camera trickery.
Selected artists/collectives will receive a £1000 commission to take part and contribute to three workshops, in order to design and produce one or more, individual or collaborative, creative interventions developed from the workshops. These will include technical support from a creative technologist as well as a curator for dissemination both online and in Edinburgh and London.
If you are an artist / technologist interested in disrupting/subverting the pandemic-inspired digital status quo, please send expressions of interest of no more than 500 words to pip.thornton@ed.ac.uk, andrew.dwyer@bristol.ac.uk, celsden@ed.ac.uk and michael.duggan@kcl.ac.uk by 8th October 2020. We don’t expect fully formed projects (these will come in the workshop sessions), but please indicate any broad ideas and thoughts you have, and highlight how your past and present practice might be a good fit for the project and its aims.
The Zoom Obscura project is in collaboration with Tinderbox Lab in Edinburgh and Hannah Redler-Hawes (independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London). Outputs from the project will be hosted and exhibited via the Data as Culture archive site and at a Creative Informatics event at the University of Edinburgh.
Are folks outside the UK eligible?
I asked Dr. Pip Thornton about eligibility and she kindly noted this in her Sept. 25, 2020 tweet (reply copied from my Twitter feed),
Open to all, but workshop timings may be more amenable to UK working hours. Having said that, we won’t know what the critical mass is until we review all the applications, so please do apply if you’re interested!
Who are the members of the Zoom Obscura project team?
From the Zoom Obscura webpage (on the creativeinformatics.org website),
Dr. Pip Thornton is a post-doctoral research associate in Creative Informatics at the University of Edinburgh, having recently gained her PhD in Geopolitics and Cybersecurity from Royal Holloway, University of London. Her thesis, Language in the Age of Algorithmic Reproduction: A Critique of Linguistic Capitalism, included theoretical, political and artistic critiques of Google’s search and advertising platforms. She has presented in a variety of venues including the Science Museum, the Alan Turing Institute and transmediale. Her work has featured in WIRED UK and New Scientist, and a collection from her {poem}.py intervention has been displayed at Open Data Institute in London. Her Edinburgh Futures Institute (EFI) funded installation Newspeak 2019, shown at the Edinburgh Festival Fringe (2019), was recently awarded an honourable mention in the Surveillance Studies Network biennial art competition (2020) and is shortlisted for the 2020 Lumen Prize for art and technology in the AI category.
Dr. Andrew Dwyer is a research associate in the University of Bristol’s Cyber Security Group. Andrew gained a DPhil in Cyber Security at the University of Oxford, where he studied and questioned the role of malware – commonly known as computational viruses and worms – through its analysis, detection, and translation into international politics and its intersection with multiple ecologies. In his doctoral thesis – Malware Ecologies: A Politics of Cybersecurity – he argued for a re-evaluation of the role of computational actors in the production and negotiation of security, and what this means for human-centred notions of weapons and warfare. Previously, Andrew has been a visiting fellow at the German ‘Dynamics of Security’ collaborative research centre based between Philipps-Universität Marburg, Justus-Liebig-Universität Gießen and the Herder Institute, Marburg and is a Research Affiliate at the Centre for Technology and Global Affairs at the University of Oxford. He will soon be starting a 3-year Addison Wheeler research fellowship in the Department of Geography at Durham University.
Dr Chris Elsden is a research associate in Design Informatics at the University of Edinburgh. Chris is primarily working on the AHRC Creative Informatics project, with specific interests in FinTech and livestreaming within the Creative Industries. He is an HCI researcher, with a background in sociology, and expertise in the human experience of a data-driven life. Using and developing innovative design research methods, his work undertakes diverse, qualitative and often speculative engagements with participants to investigate emerging relationships with technology – particularly data-driven tools and financial technologies. Chris gained his PhD in Computer Science at Open Lab, Newcastle University in 2018, and in 2019 was a recipient of a SIGCHI Outstanding Dissertation Award.
Dr Mike Duggan is a Teaching Fellow in Digital Cultures in the Department of Digital Humanities at King’s College London. He was awarded a PhD in Cultural Geography from Royal Holloway, University of London in 2017, which examined everyday digital mapping practices. This project was co-funded by the Ordnance Survey and the EPSRC. He is a member of the Living Maps network, where he is an editor for the ‘navigations’ section and previously curated the seminar series. Mike’s research is broadly interested in the digital and cultural geographies that emerge from the intersections between everyday life and digital technology.
Professor Chris Speed is Chair of Design Informatics at the University of Edinburgh where his research focuses upon the Network Society, Digital Art and Technology, and The Internet of Things. Chris has sustained a critical enquiry into how network technology can engage with the fields of art, design and social experience through a variety of international digital art exhibitions, funded research projects, books journals and conferences. At present Chris is working on funded projects that engage with the social opportunities of crypto-currencies, an internet of toilet roll holders, and a persistent argument that chickens are actually robots. Chris is co-editor of the journal Ubiquity and co-directs the Design Informatics Research Centre that is home to a combination of researchers working across the fields of interaction design, temporal design, anthropology, software engineering and digital architecture, as well as the PhD, MA/MFA and MSc and Advanced MSc programmes.
David Chatting is a designer and technologist who works in software and hardware to explore the impact of emerging technologies in everyday lives. He is currently a PhD student in the Department of Design at Goldsmiths, University of London, a Visiting Researcher at Newcastle University’s Open Lab and has his own design practice. Previously he was a Senior Researcher at BT’s Broadband Applications Research Centre. David has a Masters degree in Design Interactions from the Royal College of Art (2012) and a Bachelors degree in Computer Science from the University of Birmingham (2000). He has published papers and filed patents in the fields of HCI, psychology, tangible interfaces, computer vision and computer graphics.
Hannah Redler Hawes (Data as Culture) is an independent curator and codirector of the Data as Culture art programme at the Open Data Institute in London. Hannah specialises in emerging artistic practice within the fields of art and science and technology, with an interest in participatory process. She has previously developed projects for museums, galleries, corporate contexts, digital space and the public realm including the Institute of Physics, Tate Modern, The Lowry, Natural History Museum, FACT Liverpool, the Digital Catapult and Science Gallery London, and has provided specialist consultancy services to the Wellcome Collection, Discover South Kensington and the Horniman Museum. Hannah enjoys projects that redraw boundaries between different disciplines. Current research is around addiction, open data, networked culture and new forms of programming beyond the gallery.
Tinderbox Collective : From grass-roots youth work to award-winning music productions, Tinderbox is building a vibrant and eclectic community of young musicians and artists in Scotland. We have a number of programmes that cross over with each other and come together wherever possible. They are open to children and young people aged 10 – 25, from complete beginners to young professionals and all levels in between. Tinderbox Lab is our digital arts programme and shared studio maker-space in Edinburgh that brings together artists across disciplines with an interest in digital media and interactive technologies. It is a new programme that started development in 2019, leading to projects and events such as Room to Play, a 10-week course for emerging artists led by Yann Seznec; various guest artist talks & workshops; digital arts exhibitions at the V&A Dundee & Edinburgh Festival of Sound; digital/electronics workshops design/development for children & young people; and research included as part of Electronic Visualisation and the Arts (EVA) London 2019 conference.
Jack Nissan (Tinderbox) is the founder and director of the Tinderbox Collective. In 2012/13, Jack took part in a fellowship programme called International Creative Entrepreneurs and spent several months working with community activists and social enterprises in China, primarily with families and communities on the outskirts of Beijing with an organisation called Hua Dan. Following this, he set up a number of international exchanges and cross-cultural productions that formed the basis for Tinderbox’s Journey of a Thousand Wings programme, a project bringing together artists and community projects from different countries. He is also a co-director and founding member of Hidden Door, a volunteer-run multi-arts festival, and has won a number of awards for his work across creative and social enterprise sectors. He has been invited to take part in several steering committees and advisory roles, including for Creative Scotland’s new cross-cutting theme on Creative Learning and Artworks Scotland’s peer-networks for artists working in participatory settings. Previously, Jack worked as a researcher in psychology and ageing for the multidisciplinary MRC Centre for Cognitive Ageing and Cognitive Epidemiology, specialising in areas of neuropsychology and memory.
Luci Holland (Tinderbox) is a Scottish (Edinburgh-based) composer, sound artist and radio presenter who composes and produces music and audiovisual art for film, games and concert. As a games music composer Luci wrote the original dynamic/responsive music for Blazing Griffin‘s 2018 release Murderous Pursuits, and has composed and arranged for numerous video game music collaborations, such as orchestrating and producing an arrangement of Jessica Curry‘s Disappearing with label Materia Collective’s bespoke cover album Pattern: An Homage to Everybody’s Gone to the Rapture. Currently she has also been composing custom game music tracks for Skyrim mod Lordbound and a variety of other film and game music projects. Luci also builds and designs interactive sonic art installations for festivals and venues (Refraction (Cryptic), CITADEL (Hidden Door)); and in 2019 Luci joined new classical music station Scala Radio to present The Console, a weekly one-hour show dedicated to celebrating great music in games. Luci also works as a musical director and composer with the youth music charity Tinderbox Project on their Orchestra & Digital Arts programmes; classical music organisation Absolute Classics; and occasionally coordinates musical experiments and productions with her music-for-media band Mantra Sound.
Good luck to all who submit an expression of interest and good luck to Dr. Thornton (I see from her bio that she’s been shortlisted for the 2020 Lumen Prize).
Fascinating, yes? More than one person has noticed that the ‘new’ lamb is “disturbingly human-like.” First, here’s more about this masterpiece and the technology used to restore it, from a July 29, 2020 University of Antwerp (Belgium) press release (Note: I do not have all of the figures [images] described in this press release embedded here),
Two non-invasive chemical imaging modalities were employed to help understand the changes made over time to the Lamb of God, the focal point of the Ghent Altarpiece (1432) by Hubert and Jan Van Eyck. Two major results were obtained: a prediction of the facial features of the Lamb of God that had been hidden beneath non-original overpaint dating from the 16th century (and later), and evidence for a smaller earlier version of the Lamb’s body with a more naturalistic build. These non-invasive imaging methods, combined with analysis of paint cross-sections and magnified examination of the paint surface, provide objective chemical evidence to understand the extent of overpaints and the state of preservation of the original Eyckian paint underneath.
The Ghent Altarpiece is one of the founding masterpieces of Western European painting. The central panel, The Adoration of the Lamb, represents the sacrifice of Christ with a depiction of the Lamb of God standing on an altar, blood pouring into a chalice. During conservation treatment and technical analysis in the 1950s, conservators recognized the presence of overpaint on the Lamb and the surrounding area. But based on the evidence available at that time, the decision was made to remove only the overpaint obscuring the background immediately surrounding the head. As a result, the ears of the Eyckian Lamb were uncovered, leading to the surprising effect of a head with four ears (Figure 1).
During the recent conservation treatment of the central panel, chemical images collected before 16th century overpaint was removed revealed facial features that predicted aspects of the Eyckian Lamb, at that time still hidden below the overpaint. For example, the smaller, v-shaped nostrils of the Eyckian Lamb are situated higher than the 16th century nose, as revealed in the map for mercury, an element associated with the red pigment vermilion (Figure 2, red arrow). A pair of eyes that look forward, slightly lower than the 16th century eyes, can be seen in a false-color hyperspectral infrared reflectance image (Figure 2, right). This image also shows dark preparatory underdrawing lines that define pursed lips, and in conjunction with the presence of mercury in this area, suggest the Eyckian lips were more prominent. In addition, the higher, 16th century ears were painted over the gilded rays of the halo (Figure 2, yellow rays). Gilding is typically the artist’s final touch when working on a painting, which supports the conclusion that the lower set of ears is the Eyckian original. Collectively, these facial features indicate that, compared to the 16th century restorer’s overpainted face, the Eyckian Lamb has a smaller face with a distinctive expression.
Figure 2: Left: Colorized composite elemental map showing the distribution of gold (in yellow), mercury (in red), and lead (in white). The red arrow indicates the position of the Eyckian Lamb’s nostrils. (University of Antwerp). Right: Composite false-color infrared reflectance image (blue – 1000 nm, green – 1350 nm, red – 1650 nm) shows underdrawn lines indicating the position of facial features of the Eyckian Lamb, including forward-gazing eyes, the division between the lips, and the jawline. (National Gallery of Art, Washington). The dotted lines indicate the outline of the head before removal of 16th century overpaint.
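As a rough illustration of how a false-color composite like the one in Figure 2 (right) can be assembled, here is a minimal sketch in Python. It assumes the three reflectance bands are already available as 2-D NumPy arrays; the function name and per-band normalization are my own, not the researchers’ actual pipeline.

```python
import numpy as np

def false_color_composite(band_1000nm, band_1350nm, band_1650nm):
    """Map three reflectance bands to the blue, green, and red channels
    of an RGB image, normalizing each band to the range [0, 1]."""
    def normalize(band):
        band = band.astype(float)
        lo, hi = band.min(), band.max()
        return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)
    # Per the caption: blue <- 1000 nm, green <- 1350 nm, red <- 1650 nm
    return np.dstack([normalize(band_1650nm),   # red channel
                      normalize(band_1350nm),   # green channel
                      normalize(band_1000nm)])  # blue channel

# Tiny synthetic example: 2x2 "images" standing in for each band
rgb = false_color_composite(np.array([[0, 1], [2, 3]]),
                            np.array([[3, 2], [1, 0]]),
                            np.array([[0, 3], [1, 2]]))
print(rgb.shape)  # (2, 2, 3)
```

The resulting array can be displayed directly as an RGB image; features that reflect strongly at one wavelength but not the others show up in a distinct hue, which is what makes underdrawing lines visible in such composites.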
The new imaging also revealed previously unrecognized revisions to the size and shape of the Lamb’s body: a more naturalistically shaped Lamb, with slightly sagging back, more rounded hindquarters and a smaller tail. The artist’s underdrawing lines used to lay out the design of the smaller shape can be seen in the false-color hyperspectral infrared reflectance image (Figure 3, lower left, white arrows). Mathematical processing of the reflectance dataset to emphasize a spectral feature associated with the pigment lead white resulted in a clearer image of the smaller Lamb (Figure 3, lower right). Differences between the paint handling of the fleece in the initial small Lamb and the revised area of the larger Lamb also were found upon reexamination of the x-radiograph and the paint surface under the microscope.
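The press release doesn’t spell out the “mathematical processing … to emphasize a spectral feature,” but a common approach in reflectance imaging spectroscopy is a band-depth map: measure how far the reflectance at a feature wavelength dips below a straight-line continuum drawn between two neighbouring “shoulder” wavelengths. The sketch below is an illustration of that general technique only; the wavelengths and function are assumptions, not the values used for the lead white feature in this study.

```python
import numpy as np

def band_depth_map(cube, wavelengths, feature_nm, left_nm, right_nm):
    """Depth of a spectral feature relative to a straight-line continuum
    between two shoulder wavelengths. Brighter output pixels indicate a
    stronger feature (e.g. one associated with a particular pigment)."""
    def band(nm):
        # Nearest-wavelength slice of the (rows, cols, bands) cube
        return cube[..., int(np.argmin(np.abs(wavelengths - nm)))]
    left, right, feature = band(left_nm), band(right_nm), band(feature_nm)
    t = (feature_nm - left_nm) / (right_nm - left_nm)
    continuum = left * (1 - t) + right * t  # continuum at feature wavelength
    return 1.0 - feature / continuum        # 0 = no feature present

# Synthetic 1x1-pixel cube with three bands; the middle band dips to 0.8
wavelengths = np.array([1400.0, 1450.0, 1500.0])  # nm, illustrative only
cube = np.ones((1, 1, 3))
cube[..., 1] = 0.8
depth = band_depth_map(cube, wavelengths, 1450.0, 1400.0, 1500.0)
print(depth[0, 0])  # approximately 0.2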
During the conservation treatment completed in 2019, decisions were informed by well-established conservation methods (high-resolution color photography, X-radiography, infrared imaging, paint sample analysis) as well as the new chemical imaging. In this way, the conservation treatment uncovered the smaller face of the Eyckian Lamb, with forward-facing eyes that meet the viewer’s gaze. Only overpaints that could be identified as being later additions dating from the 16th century onward were carefully and safely removed. The body of the Lamb, however, has not changed. The material evidence indicates that the lead white paint layer used to define the larger squared-off hindquarters was applied prior to the 16th century restoration, but because analysis at the present time cannot definitively establish whether this was a change by the original artist(s) or a very early restoration or alteration by another artist, the enlarged contour of the Lamb was left untouched.
Chemical imaging technologies can be used to build confidence about the state of preservation of original paint and help guide the decision to remove overpaint. Combined with the conservators’ thorough optical examination, informed by years of experience and insights derived from paint cross-sections, chemical imaging methods will no doubt be central to ongoing interdisciplinary research, helping to resolve long-standing art-historical issues on the Ghent Altarpiece as well as other works of art. These findings were obtained by researchers from the University of Antwerp using macroscale X-ray fluorescence imaging and researchers at the National Gallery of Art, Washington using infrared reflectance imaging spectroscopy, interpreted in conjunction with the observations of the scientists and the conservation team from The Royal Institute for Cultural Heritage (KIK-IRPA), Brussels.
Restorers found that the central panel of the artwork, known as the Adoration of the Mystic Lamb, had been painted over in the 16th Century.
Another artist had altered the Lamb of God, a symbol for Jesus depicted at the centre of the panel.
Now conservationists have stripped away the overpaint, revealing the lamb’s “intense gaze” and “large frontal eyes”.
…
Hélène Dubois, the head of the restoration project, told the Art Newspaper the original lamb had a more “intense interaction with the onlookers”.
She said the lamb’s “cartoonish” depiction, which departs from the painting’s naturalistic style, required more research.
…
The lamb has been described as having an “alarmingly humanoid face” with “penetrating, close-set eyes, full pink lips and flared nostrils” by the Smithsonian Magazine.
These features are “eye-catching, if not alarmingly anthropomorphic”, said the magazine, the official journal of the Smithsonian Institution.
There was also disbelief on social media, where the lamb was called “disturbing” by some and compared to an “alien creature”. Some said they felt it would have been better to not restore the lamb’s original face.
…
The painter of the panel, Jan Van Eyck, is considered to be one of the most technical and talented artists of his generation. However, it is widely believed that The Ghent Altarpiece was started by his brother, Hubert Van Eyck.
Taken away by Napoleon’s troops in the 1700s and by the Nazis during World War Two, the altarpiece is thought to be one of the most frequently stolen artworks of all time.
Jennifer Ouellette’s July 29, 2020 article for Ars Technica delves further into the technical details, along with some history about this particular 21st Century restoration, in which the conservators and experts were assisted by artificial intelligence (AI).
Scientists used equipment at the Canadian Light Source (CLS; a synchrotron in Saskatoon, Saskatchewan, Canada) in the quest for better glowing dots on your television screen (and maybe your computer and phone screens, too). From an August 20, 2020 news item on Nanowerk,
There are many things quantum dots could do, but the most obvious place they could change our lives is to make the colours on our TVs and screens more pristine. Research using the Canadian Light Source (CLS) at the University of Saskatchewan is helping to bring this technology closer to our living rooms.
An August 19, 2020 CLS news release (also received via email) by Victoria Martinez, which originated the news item, explains what quantum dots are and fills in with technical details about this research,
Quantum dots are nanocrystals that glow, a property that scientists have been working with to develop next-generation LEDs. When a quantum dot glows, it creates very pure light in a precise wavelength of red, blue or green. Conventional LEDs, found in our TV screens today, produce white light that is filtered to achieve desired colours, a process that leads to less bright and muddier colours.
Until now, blue-glowing quantum dots, which are crucial for creating a full range of colour, have proved particularly challenging for researchers to develop. However, University of Toronto (U of T) researcher Dr. Yitong Dong and collaborators have made a huge leap in blue quantum dot fluorescence, results they recently published in Nature Nanotechnology.
“The idea is that if you have a blue LED, you have everything. We can always down convert the light from blue to green and red,” says Dong. “Let’s say you have green, then you cannot use this lower-energy light to make blue.”
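Dong’s point that “if you have a blue LED, you have everything” follows from photon energies: E = hc/λ, so shorter-wavelength blue photons carry more energy than green or red ones and can be down-converted, but a lower-energy green photon cannot be turned into blue. A quick back-of-the-envelope check (the wavelengths below are representative values I’ve chosen for illustration, not figures from the paper):

```python
# Planck constant times the speed of light, expressed in eV·nm
HC_EV_NM = 1239.84

def photon_energy_ev(wavelength_nm):
    """Photon energy in electron-volts: E = hc / lambda."""
    return HC_EV_NM / wavelength_nm

# Representative wavelengths for each colour (illustrative)
for colour, wl in [("blue", 450), ("green", 530), ("red", 630)]:
    print(f"{colour}: {photon_energy_ev(wl):.2f} eV")
# blue: 2.76 eV, green: 2.34 eV, red: 1.97 eV
```

Since energy can only be lost (not gained) in down-conversion, blue’s ~2.76 eV photons can yield green or red light, but not the other way around.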
The team’s breakthrough has led to quantum dots that produce green light at an external quantum efficiency (EQE) of 22% and blue at 12.3%. The theoretical maximum efficiency is not far off at 25%, and this is the first blue perovskite LED reported as achieving an EQE higher than 10%.
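For readers unfamiliar with the metric, external quantum efficiency is simply the fraction of injected electrons that result in a photon emitted out of the device. A tiny sketch (the electron and photon counts below are invented to match the reported percentages, not measurements from the paper):

```python
def external_quantum_efficiency(photons_out, electrons_in):
    """EQE: fraction of injected electrons that yield an emitted photon."""
    return photons_out / electrons_in

# Illustrative counts only: per 1,000 injected electrons
print(f"green: {external_quantum_efficiency(220, 1000):.1%}")  # 22.0%
print(f"blue:  {external_quantum_efficiency(123, 1000):.1%}")  # 12.3%
```

An EQE of 22% against a theoretical maximum of 25% means the green devices are already close to the best a conventional (non-light-extraction-enhanced) planar LED structure can do.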
The Science
Dong has been working in the field of quantum dots for two years in Dr. Edward Sargent’s research group at the U of T. This astonishing increase in efficiency took time, an unusual production approach, and overcoming several scientific hurdles to achieve.
CLS techniques, particularly GIWAXS [grazing incidence wide-angle X-ray scattering] on the HXMA [hard X-ray micro-analysis] beamline, allowed the researchers to verify the structures achieved in their quantum dot films. This validated their results and helped clarify what the structural changes achieve in terms of LED performance.
“The CLS was very helpful. GIWAXS is a fascinating technique,” says Dong.
The first challenge was uniformity, important to ensuring a clear blue colour and to prevent the LED from moving towards producing green light.
“We used a special synthetic approach to achieve a very uniform assembly, so every single particle has the same size and shape. The overall film is nearly perfect and maintains the blue emission conditions all the way through,” says Dong.
Next, the team needed to tackle the charge injection needed to excite the dots into luminescence. Since the crystals are not very stable, they need stabilizing molecules to act as scaffolding and support them. These are typically long chains of up to 18 carbons – non-conductive molecules at the surface – making it hard to get the energy in to produce light.
“We used a special surface structure to stabilize the quantum dot. Compared to the films made with long chain molecules capped quantum dots, our film has 100 times higher conductivity, sometimes even 1000 times higher.”
This remarkable performance is a key benchmark in bringing these nanocrystal LEDs to market. However, stability remains an issue and quantum dot LEDs suffer from short lifetimes. Dong is excited about the potential for the field and adds, “I like photons, these are interesting materials, and, well, these glowing crystals are just beautiful.”
Here’s a link to and a citation for the paper,
Bipolar-shell resurfacing for blue LEDs based on strongly confined perovskite quantum dots by Yitong Dong, Ya-Kun Wang, Fanglong Yuan, Andrew Johnston, Yuan Liu, Dongxin Ma, Min-Jae Choi, Bin Chen, Mahshid Chekini, Se-Woong Baek, Laxmi Kishore Sagar, James Fan, Yi Hou, Mingjian Wu, Seungjin Lee, Bin Sun, Sjoerd Hoogland, Rafael Quintero-Bermudez, Hinako Ebe, Petar Todorovic, Filip Dinic, Peicheng Li, Hao Ting Kung, Makhsud I. Saidaminov, Eugenia Kumacheva, Erdmann Spiecker, Liang-Sheng Liao, Oleksandr Voznyy, Zheng-Hong Lu, Edward H. Sargent. Nature Nanotechnology volume 15, pages 668–674 (2020) DOI: https://doi.org/10.1038/s41565-020-0714-5 Published: 06 July 2020 Issue Date: August 2020
This paper is behind a paywall.
If you search “Edward Sargent” (he’s the last author listed in the citation) here on this blog, you will find a number of postings that feature work from his laboratory at the University of Toronto.
I was half-expecting to read about some sort of fancy carbon nanotubes—I was wrong. From an August 12, 2020 news item on ScienceDaily where the researchers keep the mystery going for a while,
A new mechanism of blood redistribution that is essential for the proper functioning of the adult retina has just been discovered in vivo by researchers at the University of Montreal Hospital Research Centre (CRCHUM).
“For the first time, we have identified a communication structure between cells that is required to coordinate blood supply in the living retina,” said Dr. Adriana Di Polo, a neuroscience professor at Université de Montréal and holder of a Canada Research Chair in glaucoma and age-related neurodegeneration, who supervised the study.
…
“We already knew that activated retinal areas receive more blood than non-activated ones,” she said, “but until now no one understood how this essential blood delivery was finely regulated.”
The study was conducted on mice by two members of Di Polo’s lab: Dr. Luis Alarcon-Martinez, a postdoctoral fellow, and Deborah Villafranca-Baughman, a PhD student. Both are the first co-authors of this study.
In living animals, as in humans, the retina uses the oxygen and nutrients contained in the blood to fully function. This vital exchange takes place through capillaries, the thinnest blood vessels in all organs of the body. When the blood supply is dramatically reduced or cut off — such as in ischemia or stroke — the retina does not receive the oxygen it needs. In this condition, the cells begin to die and the retina stops working as it should.
Wrapped around the capillaries are pericytes, cells that have the ability to control the amount of blood passing through a single capillary simply by squeezing and releasing it.
“Using a microscopy technique to visualize vascular changes in living mice, we showed that pericytes project very thin tubes, called inter-pericyte tunnelling nanotubes, [emphasis mine] to communicate with other pericytes located in distant capillaries,” said Alarcon-Martinez. “Through these nanotubes, the pericytes can talk to each other to deliver blood where it is most needed.”
Another important feature, added Villafranca-Baughman, is that “the capillaries lose their ability to shuttle blood where it is required when the tunnelling nanotubes are damaged–after an ischemic stroke, for example. The lack of blood supply that follows has a detrimental effect on neurons and the overall tissue function.”
The team’s findings suggest that microvascular deficits observed in neurodegenerative diseases like strokes, glaucoma, and Alzheimer’s disease might result from the loss of tunnelling nanotubes and impaired blood distribution. Strategies that protect these nanostructures should then be beneficial, but remain to be demonstrated.
Here’s a link to and a citation for the paper,
Interpericyte tunnelling nanotubes regulate neurovascular coupling by Luis Alarcon-Martinez, Deborah Villafranca-Baughman, Heberto Quintero, J. Benjamin Kacerovsky, Florence Dotigny, Keith K. Murai, Alexandre Prat, Pierre Drapeau & Adriana Di Polo. Nature (2020) Published: 12 August 2020 DOI: https://doi.org/10.1038/s41586-020-2589-x
A July 29, 2020 news item on ScienceDaily announces a study showing that quantum loop cosmology can account for some large-scale mysteries,
While [1] Einstein’s theory of general relativity can explain a large array of fascinating astrophysical and cosmological phenomena, some aspects of the properties of the universe at the largest scales remain a mystery. A new study using loop quantum cosmology — a theory that uses quantum mechanics to extend gravitational physics beyond Einstein’s theory of general relativity — accounts for two major mysteries. While the differences in the theories occur at the tiniest of scales — much smaller than even a proton — they have consequences at the largest of accessible scales in the universe. The study, which appears online July 29 [2020] in the journal Physical Review Letters, also provides new predictions about the universe that future satellite missions could test.
While [2] a zoomed-out picture of the universe looks fairly uniform, it does have a large-scale structure, for example because galaxies and dark matter are not uniformly distributed throughout the universe. The origin of this structure has been traced back to the tiny inhomogeneities observed in the Cosmic Microwave Background (CMB)–radiation that was emitted when the universe was 380 thousand years young that we can still see today. But the CMB itself has three puzzling features that are considered anomalies because they are difficult to explain using known physics.
“While [3] seeing one of these anomalies may not be that statistically remarkable, seeing two or more together suggests we live in an exceptional universe,” said Donghui Jeong, associate professor of astronomy and astrophysics at Penn State and an author of the paper. “A recent study in the journal Nature Astronomy proposed an explanation for one of these anomalies that raised so many additional concerns, they flagged a ‘possible crisis in cosmology’ [emphasis mine]. Using quantum loop cosmology, however, we have resolved two of these anomalies naturally, avoiding that potential crisis.”
Research over the last three decades has greatly improved our understanding of the early universe, including how the inhomogeneities in the CMB were produced in the first place. These inhomogeneities are a result of inevitable quantum fluctuations in the early universe. During a highly accelerated phase of expansion at very early times–known as inflation–these primordial, minuscule fluctuations were stretched under gravity’s influence and seeded the observed inhomogeneities in the CMB.
“To understand how primordial seeds arose, we need a closer look at the early universe, where Einstein’s theory of general relativity breaks down,” said Abhay Ashtekar, Evan Pugh Professor of Physics, holder of the Eberly Family Chair in Physics, and director of the Penn State Institute for Gravitation and the Cosmos. “The standard inflationary paradigm based on general relativity treats space-time as a smooth continuum. Consider a shirt that appears like a two-dimensional surface, but on closer inspection you can see that it is woven by densely packed one-dimensional threads. In this way, the fabric of space-time is really woven by quantum threads. In accounting for these threads, loop quantum cosmology allows us to go beyond the continuum described by general relativity where Einstein’s physics breaks down–for example beyond the Big Bang.”
The researchers’ previous investigation into the early universe replaced the idea of a Big Bang singularity, where the universe emerged from nothing, with the Big Bounce, where the current expanding universe emerged from a super-compressed mass that was created when the universe contracted in its preceding phase. They found that all of the large-scale structures of the universe accounted for by general relativity are equally explained by inflation after this Big Bounce using equations of loop quantum cosmology.
In the new study, the researchers determined that inflation under loop quantum cosmology also resolves two of the major anomalies that appear under general relativity.
“The primordial fluctuations we are talking about occur at the incredibly small Planck scale,” said Brajesh Gupt, a postdoctoral researcher at Penn State at the time of the research and currently at the Texas Advanced Computing Center of the University of Texas at Austin. “A Planck length is about 20 orders of magnitude smaller than the radius of a proton. But corrections to inflation at this unimaginably small scale simultaneously explain two of the anomalies at the largest scales in the universe, in a cosmic tango of the very small and the very large.”
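Gupt’s scale comparison is easy to check with back-of-the-envelope arithmetic. Here’s a quick sketch in Python; the Planck length and proton radius figures below are standard textbook values I’ve supplied for illustration, not numbers taken from the paper:

```python
import math

# Standard values, not from the paper: the Planck length and an
# approximate proton charge radius, both in metres.
planck_length = 1.616e-35
proton_radius = 0.84e-15

# How many orders of magnitude separate the two scales?
orders = math.log10(proton_radius / planck_length)
print(round(orders))  # ~20, matching the comparison quoted above
```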
The researchers also produced new predictions about a fundamental cosmological parameter and primordial gravitational waves that could be tested during future satellite missions, including LiteBird and Cosmic Origins Explorer, which will continue to improve our understanding of the early universe.
That’s a lot of ‘while’. I’ve done this sort of thing, too, and whenever I come across it later, it’s painful.
A new class of nanosensor developed in Brazil could more accurately identify dengue and Zika infections, a task that is complicated by their genetic similarities and which can result in misdiagnosis.
The technique uses gold nanoparticles and can “observe” viruses at the atomic level, according to a study published in Scientific Reports (“Nanosensors based on LSPR are able to serologically differentiate dengue from Zika infections”).
Belonging to the Flavivirus genus in the Flaviviridae family, Zika and dengue viruses share more than 50 per cent similarity in their amino acid sequence. Both viruses are spread by mosquitos and can have long-term side effects. The Flaviviridae virus family was named after the yellow fever virus and comes from the Latin word for golden, or yellow, in colour.
“Diagnosing [dengue virus] infections is a high priority in countries affected by annual epidemics of dengue fever. The correct diagnostic is essential for patient managing and prognostic as there are no specific antiviral drugs to treat the infection,” the authors say.
More than 1.8 million people are suspected to have been infected with dengue so far this year in the Americas, with 4,000 severe cases and almost 700 deaths, the Pan American Health Organization says. The annual global average is estimated to be between 100 million and 400 million dengue infections, according to the World Health Organization.
Flávio Fonseca, study co-author and researcher at the Federal University of Minas Gerais, tells SciDev.Net it is almost impossible to differentiate between dengue and Zika viruses.
“A serologic test that detects antibodies against dengue also captures Zika-generated antibodies. We call it cross-reactivity,” he says.
…
Meghie Rodrigues’ July 29, 2020 article for SciDev.net, which originated the news item, delves further into the work,
Co-author and virologist, Maurício Nogueira, tells SciDev.Net that avoiding cross-reactivity is crucial because “dengue is a disease that kills — and can do so quickly if the right diagnosis is not made. As for Zika, it offers risks for foetuses to develop microcephaly, and we can’t let pregnant women spend seven or eight months wondering whether they have the virus or not.”
There is also no specific antiviral treatment for Zika and the search for a vaccine is ongoing.
Virus differentiation is important to accurately measure the real impact of both diseases on public health. The most widely used blood test, the enzyme-linked immunosorbent assay (ELISA), is limited in its ability to tell the difference between the viruses, the authors say.
As dengue has four variations, known as serotypes, the team created four different nanoparticles and covered each of them with a different dengue protein. They then applied serum from a blood sample, as in an ELISA test. The researchers found that sample antibodies bound with the viruses’ proteins, changing the pattern of electrons on the gold nanoparticle surface.
…
Should you check out Rodrigues’ entire article, you might want to take some time to explore SciDev.net to find science news from countries that don’t often get the coverage they should.
Here’s a link to and a citation for the researchers’ paper,
Nanosensors based on LSPR are able to serologically differentiate dengue from Zika infections by Alice F. Versiani, Estefânia M. N. Martins, Lidia M. Andrade, Laura Cox, Glauco C. Pereira, Edel F. Barbosa-Stancioli, Mauricio L. Nogueira, Luiz O. Ladeira & Flávio G. da Fonseca. Scientific Reports volume 10, Article number: 11302 (2020) DOI: https://doi.org/10.1038/s41598-020-68357-9 Published: 09 July 2020
I always enjoy the unexpected in a story and this one has to do with plantains and luxury cars, from a July 29, 2020 news item on phys.org (Note: A link has been removed),
A luxury automobile is not really a place to look for something like sisal, hemp, or wood. Yet automakers have been using natural fibers for decades. Some high-end sedans and coupes use these in composite materials for interior door panels, for engine, interior and noise insulation, and internal engine covers, among other uses.
Unlike steel or aluminum, natural fiber composites do not rust or corrode. They can also be durable and easily molded. The biggest advantages of fiber reinforced polymer composites for cars are light weight, good crash properties, and noise- and vibration-reducing characteristics. But making more parts of a vehicle from renewable sources is a challenge. Natural fiber polymer composites can crack, break and bend. The reasons include low tensile, flexural and impact strength in the composite material.
Researchers from the University of Johannesburg [South Africa] have now demonstrated that plantain, a starchy type of banana, is a promising source for an emerging type of composite material for the automotive industry. The natural plantain fibers are combined with carbon nanotubes and epoxy resin to form a natural fiber-reinforced polymer hybrid nanocomposite material. Plantain is a year-round staple food crop in tropical regions of Africa, Asia and South America. Many types of plantain are eaten cooked.
The researchers moulded a composite material from epoxy resin, treated plantain fibers and carbon nanotubes. The optimum amount of nanotubes was 1% by weight of the plantain-epoxy resin combined.
The resulting plantain nanocomposite was much stronger and stiffer than epoxy resin on its own.
The composite had 31% more tensile and 34% more flexural strength than the epoxy resin alone. The nanocomposite also had 52% higher tensile modulus and 29% higher flexural modulus than the epoxy resin alone.
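To see what those percentage gains mean in absolute terms, here’s a small sketch that applies the reported improvements to baseline values for neat epoxy. The baseline numbers are typical literature figures I’ve assumed for illustration; they are not from the Johannesburg study:

```python
# Hypothetical baseline properties for neat epoxy resin (typical
# literature values, NOT taken from the study).
baseline = {
    "tensile strength (MPa)": 60.0,
    "flexural strength (MPa)": 90.0,
    "tensile modulus (GPa)": 2.5,
    "flexural modulus (GPa)": 3.0,
}

# Percentage improvements reported for the plantain/nanotube composite.
reported_gain = {
    "tensile strength (MPa)": 0.31,
    "flexural strength (MPa)": 0.34,
    "tensile modulus (GPa)": 0.52,
    "flexural modulus (GPa)": 0.29,
}

for prop, base in baseline.items():
    improved = base * (1 + reported_gain[prop])
    print(f"{prop}: {base} -> {improved:.2f}")
```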
“The hybridization of plantain with multi-walled carbon nanotubes increases the mechanical and thermal strength of the composite. These increases make the hybrid composite a competitive and alternative material for certain car parts,” says Prof Tien-Chien Jen.
Prof Jen is the lead researcher in the study and the Head of the Department of Mechanical Engineering Science at the University of Johannesburg.
Natural fibres vs metals
Producing car parts from renewable sources has several benefits, says Dr Patrick Ehi Imoisili. Dr Imoisili is a postdoctoral researcher in the Department of Mechanical Engineering Science at the University of Johannesburg.
“There is a trend of using natural fibre in vehicles. The reason is that natural fibres composites are renewable, low cost and low density. They have high specific strength and stiffness. The manufacturing processes are relatively safe,” says Imoisili.
“Using car parts made from these composites, can reduce the mass of a vehicle. That can result in better fuel-efficiency and safety. These components will not rust or corrode like metals. Also, they can be stiff, durable and easily molded,” he adds.
However, some natural fibre reinforced polymer composites currently have disadvantages such as water absorption, low impact strength and low heat resistance. Car owners can notice effects such as cracking, bending or warping of a car part, says Imoisili.
Standardised tests
The researchers subjected the plantain nanocomposite to a series of standardised industrial tests. These included ASTM Test Methods D638 and D790; impact testing according to the ASTM A-370 standard; and ASTM D-2240.
The tests showed that a composite with 1% nanotubes had the best strength and stiffness, compared to epoxy resin alone.
The plantain nanocomposite also showed marked improvement in micro hardness, impact strength and thermal conductivity compared to epoxy resin alone.
Moulding a nanocomposite from natural fibres
The researchers compression-moulded a ‘stress test object’. They used 1 part inedible plantain fibres, 4 parts epoxy resin and multi-walled carbon nanotubes. The epoxy resin and nanotubes came from commercial suppliers. The epoxy was similar to resins that auto manufacturers use in certain car parts.
The plantain fibres came from the ‘trunks’ or pseudo-stems, of plantain plants in the south-western region of Nigeria. The pseudo-stems consist of tightly-overlapping leaves.
The researchers treated the plantain fibers with several processes. The first process is an ancient method to separate plant fibres from stems, called water-retting.
In the second process, the fibres were soaked in a 3% caustic soda solution for 4 hours. After drying, the fibres were treated with high-frequency microwave radiation of 2.45 GHz at 550 W for 2 minutes.
The caustic soda and microwave treatments improved the bonding between the plantain fibers and the epoxy resin in the nanocomposite.
Next, the researchers dispersed the nanotubes in ethanol to prevent ‘bunching’ of the tubes in the composite. After that, the plantain fibres, nanotubes and epoxy resin were combined inside a mold. The mold was then compressed with a load for 24 hours at room temperature.
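The recipe above reduces to simple arithmetic: 1 part fibre to 4 parts epoxy by weight, plus nanotubes at 1% of the combined fibre-epoxy mass. A minimal sketch (the 500 g batch size is my own example, not a figure from the study):

```python
def batch_masses(fibre_epoxy_total_g):
    """Split a fibre+epoxy batch into the reported 1:4 ratio,
    with nanotubes at 1% by weight of the combined mass."""
    fibre = fibre_epoxy_total_g * 1 / 5      # 1 part in 5
    epoxy = fibre_epoxy_total_g * 4 / 5      # 4 parts in 5
    nanotubes = fibre_epoxy_total_g * 0.01   # 1% by weight
    return fibre, epoxy, nanotubes

# Example: a 500 g fibre+epoxy batch
print(batch_masses(500.0))  # (100.0, 400.0, 5.0)
```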
Food crop vs industrial raw material
Plantain is grown in tropical regions worldwide. This includes Mexico, Florida and Texas in North America; Brazil, Honduras, Guatemala in South and Central America; India, China, and Southeast Asia.
In West and Central Africa, farmers grow plantain in Cameroon, Ghana, Uganda, Rwanda, Nigeria, Cote d’Ivoire and Benin.
Using biomass from major staple food crops can create problems in food security for people with low incomes. In addition, the automobile industry will need access to reliable sources of natural fibres to increase use of natural fibre composites.
In the case of plantains, potential tensions between food security and industrial uses for composite materials are low. This is because plantain farmers discard the pseudo-stems as agro-waste after harvest.
The night sky has inspired speculation, discovery, and stories throughout time and from all the peoples of this planet. The information derived from observing the stars and moon has led to voyages on land, on sea, through space, and into the recesses of minds and hearts.
Currently, an ancient celestial practice, the celebration of solstices and equinoxes, seems to be gaining popularity and acceptance.
Indigenous Star Knowledge Symposia: A series of local and international gatherings, on the land and online
Organised by Ingenium in collaboration with the Institute of Indigenous Research and Studies at the University of Ottawa, and hosted on traditional Algonquin Anishnaabeg territory, this series of symposia (scheduled on the dates of the Fall equinox, Winter solstice, Spring equinox and Summer solstice) will combine spiritual ceremony, presentations, activities and dialogue, both online and on the land. The symposia will feature gatherings of Indigenous Knowledge Keepers, Elders, educators and scholars to share and exchange towards reclaiming, preserving, and revitalizing Star Knowledge with Indigenous communities worldwide.
Our original plan was to have a symposium in September 2020, but due to Covid-19 we have reshaped the entire program to spread out the timeline while combining physical and digitally-inclusive experiences. This blended format greatly expands our original intent to offer a space for teaching and learning, while bringing hope and healing through the Indigenous Star Knowledge and our work.
Fall Equinox: Protocols before Knowledge, Seasonal and regional themes
September 21, 2020 (7 p.m. EST, Ottawa, Canada); September 22, 2020 (9:00 a.m. Lismore, Australia)
For Indigenous people astronomy and cosmology are intricately intertwined. Star Knowledge, like everything else, is all about relationships and teaches us our place in the universe.
Shawn Wilson is Opaskwayak Cree from Manitoba. He works at Gnibi College of Indigenous Australian Peoples and is also an Adjunct Professor at Østfold University College in Norway. Shawn will discuss how understanding Indigenous Star Knowledge develops a deeper understanding of the very nature of reality. To gain this understanding requires us to develop deeper relationships with Sky Country.
Stuart Barlo is a Yuin man from the south coast of New South Wales, and is Dean of Gnibi College of Indigenous Australian Peoples. Stuart will talk about the journey of being able to speak about Sky Country. The journey requires learning how to prepare yourself and create a safe space to develop relationship with Sky Country.
Panellists:
Wilfred Buck, Manitoba First Nations Education Resource Center
*Postponed and adapted due to COVID* Coinciding with a ceremony at Kitigan Zibi, Quebec to launch the Algonquin Star Knowledge Project. Offering of Tobacco and Prayer on the land with Peter Decontie, Wilfred Buck, Anita Tenasco and members of the Algonquin community.
It gets a little confusing but I gather that the symposia are linked to a larger initiative, which has its roots in a 2017 exhibition (co-curated by Wilfred Buck and Annette S. Lee) at Canada’s Science and Technology Museum. ***Video link removed Dec. 8, 2020***
One Sky, Many Worlds; Indigenous Voices in Astronomy
I gather various parties have been working together to produce not only the symposia but a new traveling exhibition “One Sky, Many Worlds; Indigenous Voices in Astronomy.”
I was going to call this item a brochure but its URL includes the words “exhibition book.” Regardless, it’s where you can get more details about “One Sky, Many Worlds” and how it was developed. Do take a look at it; there are many beautiful images, including Margaret Nazon’s beadwork art, one piece of which I featured at the beginning of this posting. There are many works of Indigenous astronomy-based art featured in the ‘brochure’. For some reason, the text is white against a dark background. Perhaps they were trying to evoke the stars against the night sky? Unfortunately, it makes the text less readable, which would seem to defeat the purpose of bothering with text in the first place. Also, it can lead to having to deal with cranky writers who worry their work won’t be read. (Just a thought)
New Partnership with Ingenium: Canada’s Museums of Science and Innovation
Nomad are proud to be selected as Ingenium’s partner to develop and tour an exciting new international travelling exhibition ‘One Sky, Many Worlds: Indigenous Voices in Astronomy’. This ground-breaking new exhibition will illustrate in a spectacular immersive display environment how for tens of thousands of years Indigenous people have been building a relationship with the night sky.
The exhibition will showcase artifacts representing global collections, whilst numerous mechanical and digital interactive elements will enhance visitors’ learning and understanding in an engaging, active way that reminds every human being that we come from the stars.
Led by Indigenous knowledge keepers, One Sky, Many Worlds: Indigenous Voices in Astronomy, is an 8,000 sq ft traveling exhibition that explores Indigenous Star Knowledge from locations around the globe. Featuring content from North America, South Africa, Australia, Mexico, South America, Asia, Hawaii, and New Zealand, One Sky asks questions, and shares experiences that will resonate with all people who look up and wonder about the night sky. The exhibition is available for tour internationally from summer 2021. [emphasis mine]
Nomad Exhibitions are innovative creators of international museum quality touring exhibitions.
Nomad offers a unique portfolio of high quality touring exhibitions combining curatorial excellence, state of the art design and seamless turnkey production. Our exhibitions are designed to facilitate exceptional international collaborations between cultural institutions on major exhibition projects, providing museum professionals with a tailored exhibition hosting experience.
Nomad Exhibitions is located in Edinburgh, Scotland, UK.
Travelling exhibition and an oddity
Should you be interested in booking the exhibition, you can go to Nomad’s “One Sky, Many Worlds” exhibition web page, where I was intrigued to find this (I’ve emphasized the portion in question),
One Sky, Many Worlds is a collaborative exhibition led by Indigenous Knowledge Keepers, both young and old, from around the world. The exhibition explores the enduring relationship and connection that Indigenous people have with the night sky and how it has provided –and continues to provide – a practical, cultural, and spiritual guidebook for life.
One Sky, Many Worlds is, at its core, experiential. A strong emphasis on exceptional objects and intriguing ideas will be carefully complemented by a variety of interactive elements and spaces designed to engage visitors in active participation.
Each exhibition section will feature an immersive experience, audio visual content, and a selection of digital interactives, many of which will be touch free. For example, visitors will be transported from the Mississippi through the Milky Way on to the Pacific Ocean via a beautiful, [emphasis mine] immersive projection experience; visitors will be engaged in stories as told by Indigenous Elders in their own language; and visitors will also have the opportunity to participate in dynamic activities that show the links between earth and sky and allow them to see the constellations in a whole new way.
The example is a bit puzzling since ‘the Mississippi’ could mean either the ‘state of Mississippi’ or the ‘Mississippi River’, neither of which has any connection to the Pacific Ocean. But, perhaps astronomy buffs would understand this better than I do.
As to why either the state or the river would be the starting point for transportation via the Milky Way, that is a mystery. Especially after taking a look at Sharmila Kuthunu’s July 1, 2019 article, “How to See the Milky Way in 5 Easy Steps” for Space Tourism Guide,
Home to 400 billion stars, our galaxy is a barred spiral that spans 100,000 light years in diameter. While that might seem huge, the Milky Way is only clearly visible from April through October in the northern hemisphere and is hidden below the horizon for half the year.
It rises in the southeast, crosses over the horizon and sets in the southwest. Since it rises and sets in the southern hemisphere, those living in the south can see it directly overhead. The largest view of the galaxy can be seen from southern hemisphere destinations like South Africa, Chile, and Australia [emphasis mine].
…
Given that there was a global collaboration and the Milky Way is visible from any number of starting points, the choice of whichever Mississippi the writer intended to highlight seems odd. (See geography of Mississippi River; geography of Mississippi state [be sure to follow the red arrow to the green rectangle bordering the Gulf of Mexico])
Most likely, it’s my ignorance showing.
Plus, when I saw Nomad was offering an example, I was hoping there’d be a description or a story representing Indigenous astronomy. If you look at the brochure/exhibition book you’ll see they had a broad range of Indigenous societies represented on the team. The nomad description seems like a lost opportunity.
In sum
Regardless of my nitpicking, both the symposia and the travelling exhibition are exciting and I hope they get the attention they deserve.
If you’re as ignorant about astronomy as I am, you might find this piece about the Milky Way on the US National Aeronautics and Space Administration (NASA) website helpful.
In trying to find a more comprehensive history of practices revolving around solstices and equinoxes, I found this August 2017 article about the Summer Solstice on the History Channel website.
Researchers at Stanford University (California, US) believe they have a solution for a problem with neuroprosthetics (Note: I have included brief comments about neuroprosthetics and possible ethical issues at the end of this posting) according to an August 5, 2020 news item on ScienceDaily,
The current generation of neural implants record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But, so far, when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the implants generated too much heat to be safe for the patient. A new study suggests how to solve this problem — and thus cut the wires.
Stanford researchers have been working for years to advance a technology that could one day help people with paralysis regain use of their limbs, and enable amputees to use their thoughts to control prostheses and interact with computers.
The team has been focusing on improving a brain-computer interface, a device implanted beneath the skull on the surface of a patient’s brain. This implant connects the human nervous system to an electronic device that might, for instance, help restore some motor control to a person with a spinal cord injury, or someone with a neurological condition like amyotrophic lateral sclerosis, also called Lou Gehrig’s disease.
The current generation of these devices record enormous amounts of neural activity, then transmit these brain signals through wires to a computer. But when researchers have tried to create wireless brain-computer interfaces to do this, it took so much power to transmit the data that the devices would generate too much heat to be safe for the patient.
Now, a team led by electrical engineers and neuroscientists Krishna Shenoy, PhD, and Boris Murmann, PhD, and neurosurgeon and neuroscientist Jaimie Henderson, MD, have shown how it would be possible to create a wireless device, capable of gathering and transmitting accurate neural signals, but using a tenth of the power required by current wire-enabled systems. These wireless devices would look more natural than the wired models and give patients freer range of motion.
Graduate student Nir Even-Chen and postdoctoral fellow Dante Muratore, PhD, describe the team’s approach in a Nature Biomedical Engineering paper.
The team’s neuroscientists identified the specific neural signals needed to control a prosthetic device, such as a robotic arm or a computer cursor. The team’s electrical engineers then designed the circuitry that would enable a future, wireless brain-computer interface to process and transmit these carefully identified and isolated signals, using less power and thus making it safe to implant the device on the surface of the brain.
To test their idea, the researchers collected neuronal data from three nonhuman primates and one human participant in a BrainGate clinical trial.
As the subjects performed movement tasks, such as positioning a cursor on a computer screen, the researchers took measurements. The findings validated their hypothesis that a wireless interface could accurately control an individual’s motion by recording a subset of action-specific brain signals, rather than acting like the wired device and collecting brain signals in bulk.
The next step will be to build an implant based on this new approach and proceed through a series of tests toward the ultimate goal.
As I found out while investigating, ethical issues in this area abound. My first thought was to look at how someone with a focus on ability studies might view the complexities.
My ‘go to’ resource for human enhancement and ethical issues is Gregor Wolbring, an associate professor at the University of Calgary (Alberta, Canada). His profile lists these areas of interest: ability studies, disability studies, governance of emerging and existing sciences and technologies (e.g. neuromorphic engineering, genetics, synthetic biology, robotics, artificial intelligence, automatization, brain machine interfaces, sensors) and more.
I can’t find anything more recent on this particular topic but I did find an August 10, 2017 essay for The Conversation where he comments on the ethical issues raised by human enhancement technology, in that case gene-editing. Regardless, he makes points that are applicable to brain-computer interfaces (human enhancement) (Note: Links have been removed),
…
Ability expectations have been and still are used to disable, or disempower, many people, not only people seen as impaired. They’ve been used to disable or marginalize women (men making the argument that rationality is an important ability and women don’t have it). They also have been used to disable and disempower certain ethnic groups (one ethnic group argues they’re smarter than another ethnic group) and others.
…
A recent Pew Research survey on human enhancement revealed that an increase in the ability to be productive at work was seen as a positive. What does such ability expectation mean for the “us” in an era of scientific advancements in gene-editing, human enhancement and robotics?
Which abilities are seen as more important than others?
The ability expectations among “us” will determine how gene-editing and other scientific advances will be used.
And so how we govern ability expectations, and who influences that governance, will shape the future. Therefore, it’s essential that ability governance and ability literacy play a major role in shaping all advancements in science and technology.
One of the reasons I find Gregor’s commentary so valuable is that he writes lucidly about ability and disability as concepts and poses what can be provocative questions about expectations and what it is to be truly abled or disabled. You can find more of his writing here on his eponymous (more or less) blog.
Ethics of clinical trials for testing brain implants
In 2003, neurologist Helen Mayberg of Emory University in Atlanta began to test a bold, experimental treatment for people with severe depression, which involved implanting metal electrodes deep in the brain in a region called area 25 [emphases mine]. The initial data were promising; eventually, they convinced a device company, St. Jude Medical in Saint Paul, to sponsor a 200-person clinical trial dubbed BROADEN.
This month [October 2017], however, Lancet Psychiatry reported the first published data on the trial’s failure. The study stopped recruiting participants in 2012, after a 6-month study in 90 people failed to show statistically significant improvements between those receiving active stimulation and a control group, in which the device was implanted but switched off.
…
… a tricky dilemma for companies and research teams involved in deep brain stimulation (DBS) research: If trial participants want to keep their implants [emphases mine], who will take responsibility—and pay—for their ongoing care? And participants in last week’s meeting said it underscores the need for the growing corps of DBS researchers to think long-term about their planned studies.
…
… participants bear financial responsibility for maintaining the device should they choose to keep it, and for any additional surgeries that might be needed in the future, Mayberg says. “The big issue becomes cost [emphasis mine],” she says. “We transition from having grants and device donations” covering costs, to patients being responsible. And although the participants agreed to those conditions before enrolling in the trial, Mayberg says she considers it a “moral responsibility” to advocate for lower costs for her patients, even if it means “begging for charity payments” from hospitals. And she worries about what will happen to trial participants if she is no longer around to advocate for them. “What happens if I retire, or get hit by a bus?” she asks.
…
There’s another uncomfortable possibility: that the hypothesis was wrong [emphases mine] to begin with. A large body of evidence from many different labs supports the idea that area 25 is “key to successful antidepressant response,” Mayberg says. But “it may be too simple-minded” to think that zapping a single brain node and its connections can effectively treat a disease as complex as depression, Krakauer [John Krakauer, a neuroscientist at Johns Hopkins University in Baltimore, Maryland] says. Figuring that out will likely require more preclinical research in people—a daunting prospect that raises additional ethical dilemmas, Krakauer says. “The hardest thing about being a clinical researcher,” he says, “is knowing when to jump.”
…
Brain-computer interfaces, symbiosis, and ethical issues
This was the most recent and most directly applicable work that I could find. From a July 24, 2019 article by Liam Drew for Nature Outlook: The brain,
“It becomes part of you,” Patient 6 said, describing the technology that enabled her, after 45 years of severe epilepsy, to halt her disabling seizures. Electrodes had been implanted on the surface of her brain that would send a signal to a hand-held device when they detected signs of impending epileptic activity. On hearing a warning from the device, Patient 6 knew to take a dose of medication to halt the coming seizure.
“You grow gradually into it and get used to it, so it then becomes a part of every day,” she told Frederic Gilbert, an ethicist who studies brain–computer interfaces (BCIs) at the University of Tasmania in Hobart, Australia. “It became me,” she said. [emphasis mine]
Gilbert was interviewing six people who had participated in the first clinical trial of a predictive BCI to help understand how living with a computer that monitors brain activity directly affects individuals psychologically. Patient 6’s experience was extreme: Gilbert describes her relationship with her BCI as a “radical symbiosis”.
Symbiosis is a term, borrowed from ecology, that means an intimate co-existence of two species for mutual advantage. As technologists work towards directly connecting the human brain to computers, it is increasingly being used to describe humans’ potential relationship with artificial intelligence.
Interface technologies are divided into those that ‘read’ the brain to record brain activity and decode its meaning, and those that ‘write’ to the brain to manipulate activity in specific regions and affect their function.
Commercial research is opaque, but scientists at social-media platform Facebook are known to be pursuing brain-reading techniques for use in headsets that would convert users’ brain activity into text. And neurotechnology companies such as Kernel in Los Angeles, California, and Neuralink, founded by Elon Musk in San Francisco, California, predict bidirectional coupling in which computers respond to people’s brain activity and insert information into their neural circuitry. [emphasis mine]
…
Already, it is clear that melding digital technologies with human brains can have provocative effects, not least on people’s agency — their ability to act freely and according to their own choices. Although neuroethicists’ priority is to optimize medical practice, their observations also shape the debate about the development of commercial neurotechnologies.
…
Neuroethicists began to note the complex nature of the therapy’s side effects. “Some effects that might be described as personality changes are more problematic than others,” says Maslen [Hannah Maslen, a neuroethicist at the University of Oxford, UK]. A crucial question is whether the person who is undergoing stimulation can reflect on how they have changed. Gilbert, for instance, describes a DBS patient who started to gamble compulsively, blowing his family’s savings and seeming not to care. He could only understand how problematic his behaviour was when the stimulation was turned off.
Such cases present serious questions about how the technology might affect a person’s ability to give consent to be treated, or for treatment to continue. [emphases mine] If the person who is undergoing DBS is happy to continue, should a concerned family member or doctor be able to overrule them? If someone other than the patient can terminate treatment against the patient’s wishes, it implies that the technology degrades people’s ability to make decisions for themselves. It suggests that if a person thinks in a certain way only when an electrical current alters their brain activity, then those thoughts do not reflect an authentic self.
…
To observe a person with tetraplegia bringing a drink to their mouth using a BCI-controlled robotic arm is spectacular. [emphasis mine] This rapidly advancing technology works by implanting an array of electrodes either on or in a person’s motor cortex — a brain region involved in planning and executing movements. The activity of the brain is recorded while the individual engages in cognitive tasks, such as imagining that they are moving their hand, and these recordings are used to command the robotic limb.
If neuroscientists could unambiguously discern a person’s intentions from the chattering electrical activity that they record in the brain, and then see that it matched the robotic arm’s actions, ethical concerns would be minimized. But this is not the case. The neural correlates of psychological phenomena are inexact and poorly understood, which means that signals from the brain are increasingly being processed by artificial intelligence (AI) software before reaching prostheses. [emphasis mine]
…
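To make concrete why the decoded signals are inexact, here is a toy sketch of the kind of learned decoder that sits between brain and prosthesis. To be clear, this is not any lab’s actual pipeline; the channel count, the linear tuning model, and the noise level are all invented for illustration. The point it demonstrates is the one Drew makes: a decoder fitted to noisy recordings recovers intended movement only approximately, never perfectly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 motor-cortex channels, each linearly tuned to a
# 2-D intended hand velocity, plus noise. The "chattering electrical
# activity" is never a clean readout of intent.
n_channels, n_samples = 50, 500
true_velocity = rng.standard_normal((n_samples, 2))   # intended movement
tuning = rng.standard_normal((2, n_channels))         # unknown tuning map
firing_rates = (true_velocity @ tuning
                + 0.5 * rng.standard_normal((n_samples, n_channels)))

# Fit a linear decoder (least squares) from firing rates back to velocity,
# standing in for the learned models that command a robotic limb.
decoder, *_ = np.linalg.lstsq(firing_rates, true_velocity, rcond=None)
decoded = firing_rates @ decoder

# The decoder tracks intent closely but not exactly; that residual gap is
# where the ethical concerns about agency and accountability live.
corr = np.corrcoef(decoded[:, 0], true_velocity[:, 0])[0, 1]
print(f"decoded-vs-intended correlation: {corr:.2f}")
```

Even in this best-case linear world with generous data, the correlation falls short of 1; real neural data are far messier, which is why opaque machine-learning models get inserted into the loop.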
But, he [Philipp Kellmeyer, a neurologist and neuroethicist at the University of Freiburg, Germany] says, using AI tools also introduces ethical issues of which regulators have little experience. [emphasis mine] Machine-learning software learns to analyse data by generating algorithms that cannot be predicted and that are difficult, or impossible, to comprehend. This introduces an unknown and perhaps unaccountable process between a person’s thoughts and the technology that is acting on their behalf.
…
Maslen is already helping to shape BCI-device regulation. She is in discussion with the European Commission about regulations it will implement in 2020 that cover non-invasive brain-modulating devices that are sold straight to consumers. [emphases mine; Note: There is a Canadian company selling this type of product, MUSE] Maslen became interested in the safety of these devices, which were covered by only cursory safety regulations. Although such devices are simple, they pass electrical currents through people’s scalps to modulate brain activity. Maslen found reports of them causing burns, headaches and visual disturbances. She also says clinical studies have shown that, although non-invasive electrical stimulation of the brain can enhance certain cognitive abilities, this can come at the cost of deficits in other aspects of cognition.
Regarding my note about MUSE, the company is InteraXon and its product is MUSE. They advertise the product as “Brain Sensing Headbands That Improve Your Meditation Practice.” The company website and the product seem to be one entity, Choose Muse. The company’s product has been used in some serious research papers, which can be found here. I did not see any research papers concerning safety issues.
… He [Gilbert] is now preparing a follow-up report on Patient 6. The company that implanted the device in her brain to help free her from seizures went bankrupt. The device had to be removed.
… Patient 6 cried as she told Gilbert about losing the device. … “I lost myself,” she said.
“It was more than a device,” Gilbert says. “The company owned the existence of this new person.”
It’s easy to forget, in all the excitement over technologies ‘making our lives better’, that there can be a dark side or two. Some of the points brought forth in the articles by Wolbring, Underwood, and Drew confirmed my uneasiness as reasonable and gave me some specific examples of how these technologies raise new issues, or old issues in new ways.
What I find interesting is that no one is using the term ‘cyborg’, which would seem quite applicable. There is an April 20, 2012 posting here titled ‘My mother is a cyborg‘ where I noted that by at least one definition people with joint replacements, pacemakers, etc. are considered cyborgs. In short, cyborgs, or technology integrated into bodies, have been amongst us for quite some time.
Interestingly, no one seems to care much when insects are turned into cyborgs (I can’t remember who pointed this out), but it is a popular area of research, especially for military and search-and-rescue applications.
I’ve sometimes used the terms ‘machine/flesh’ and/or ‘augmentation’ to describe technologies integrated with bodies, human or otherwise. You can find lots on the topic here, however I’ve tagged or categorized it.
Amongst other pieces you can find here, there’s the August 8, 2016 posting, ‘Technology, athletics, and the ‘new’ human‘ featuring Oscar Pistorius when he was still best known as the ‘blade runner’ and a remarkably successful paralympic athlete. It’s about his efforts to compete against able-bodied athletes at the London Olympic Games in 2012. It is fascinating to read about technology and elite athletes of any kind as they are often the first to try out ‘enhancements’.
Gregor Wolbring has a number of essays on The Conversation looking at Paralympic athletes and their pursuit of enhancements and how all of this is affecting our notions of abilities and disabilities. By extension, one has to assume that ‘abled’ athletes are also affected with the trickle-down effect on the rest of us.
Regardless of where we start the investigation, there is a sameness to the participants in neuroethics discussions, with a few experts and commercial interests deciding how the rest of us (however you define ‘us’, as per Gregor Wolbring’s essay) will live.
This paucity of perspectives is something I was getting at in my COVID-19 editorial for the Canadian Science Policy Centre. My thesis is that we need a range of ideas and insights that cannot be culled from small groups of people who’ve trained on and read the same materials, or from entrepreneurs who too often seem to put profit over thoughtful implementation of new technologies. (See the May 2020 PDF edition [you’ll find me under Policy Development] or my May 15, 2020 posting here, with all the sources listed.)
As for this new research at Stanford, it’s exciting news that also raises questions, as it offers the hope of independent movement for people diagnosed as tetraplegic (sometimes known as quadriplegic).