It’s not here yet but there are scientists working on an internet of living things (IoLT). There are some details (see the fourth paragraph from the bottom of the news release excerpt) about how an IoLT would be achieved but it seems these are early days. From a September 9, 2021 University of Illinois news release (also on EurekAlert), Note: Links have been removed,
The National Science Foundation (NSF) announced today an investment of $25 million to launch the Center for Research on Programmable Plant Systems (CROPPS). The center, a partnership among the University of Illinois at Urbana-Champaign, Cornell University, the Boyce Thompson Institute, and the University of Arizona, aims to develop tools to listen and talk to plants and their associated organisms.
“CROPPS will create systems where plants communicate their hidden biology to sensors, optimizing plant growth to the local environment. This Internet of Living Things (IoLT) will enable breakthrough discoveries, offer new educational opportunities, and open transformative opportunities for productive, sustainable, and profitable management of crops,” says Steve Moose (BSD/CABBI/GEGC), the grant’s principal investigator at Illinois. Moose is a genomics professor in the Department of Crop Sciences, part of the College of Agricultural, Consumer and Environmental Sciences (ACES).
As an example of what’s possible, CROPPS scientists could deploy armies of autonomous rovers to monitor and modify crop growth in real time. The researchers created leaf sensors to report on belowground processes in roots. This combination of machine and living sensors will enable completely new ways of decoding the language of plants, allowing researchers to teach plants how to better handle environmental challenges.
“Right now, we’re working to program a circuit that responds to low-nitrogen stress, where the plant growth rate is ‘slowed down’ to give farmers more time to apply fertilizer during the window that is the most efficient at increasing yield,” Moose explains.
With 150+ years of global leadership in crop sciences and agricultural engineering, along with newer transdisciplinary research units such as the National Center for Supercomputing Applications (NCSA) and the Center for Digital Agriculture (CDA), Illinois is uniquely positioned to take on the technical challenges associated with CROPPS.
But U of I scientists aren’t working alone. For years, they’ve collaborated with partner institutions to conceptualize the future of digital agriculture and bring it into reality. For example, researchers at Illinois’ CDA and Cornell’s Initiative for Digital Agriculture jointly proposed the first IoLT for agriculture, laying the foundation for CROPPS.
“CROPPS represents a significant win from having worked closely with our partners at Cornell and other institutions. We’re thrilled to move forward with our colleagues to shift paradigms in agriculture,” says Vikram Adve, Donald B. Gillies Professor in computer science at Illinois and co-director of the CDA.
CROPPS research may sound futuristic, and that’s the point.
The researchers say new tools are needed to make crops productive, flexible, and sustainable enough to feed our growing global population under a changing climate. Many of the tools under development – biotransducers small enough to fit between soil particles, dexterous and highly autonomous field robots, field-applied gene editing nanoparticles, IoLT clouds, and more – have been studied in the proof-of-concept phase, and are ready to be scaled up.
“One of the most exciting goals of CROPPS is to apply recent advances in sensing and data analytics to understand the rules of life, where plants have much to teach us. What we learn will bring a stronger biological dimension to the next phase of digital agriculture,” Moose says.
CROPPS will also foster innovations in STEM [science, technology, engineering, and mathematics] education through programs that involve students at all levels, and each partner institution will share courses in digital agriculture topics. CROPPS also aims to engage professionals in digital agriculture at any career stage, and to learn how the public views innovations in this emerging technology area.
“Along with cutting-edge research, CROPPS coordinated educational programs will address the future of work in plant sciences and agriculture,” says Germán Bollero, associate dean for research in the College of ACES.
An August 25, 2021 news item on ScienceDaily announced research that will allow more direct communication between cells and computers,
Genetically encoded reporter proteins have been a mainstay of biotechnology research, allowing scientists to track gene expression, understand intracellular processes and debug engineered genetic circuits.
But conventional reporting schemes that rely on fluorescence and other optical approaches come with practical limitations that could cast a shadow over the field’s future progress. Now, researchers at the University of Washington and Microsoft have created a “nanopore-tal” into what is happening inside these complex biological systems, allowing scientists to see reporter proteins in a whole new light.
The team introduced a new class of reporter proteins that can be directly read by a commercially available nanopore sensing device. The new system ― dubbed “Nanopore-addressable protein Tags Engineered as Reporters” or “NanoporeTERs” ― can detect multiple protein expression levels from bacterial and human cell cultures far beyond the capacity of existing techniques.
“NanoporeTERs offer a new and richer lexicon for engineered cells to express themselves and shed new light on the factors they are designed to track. They can tell us a lot more about what is happening in their environment all at once,” said co-lead author Nicolas Cardozo, a doctoral student with the UW Molecular Engineering and Sciences Institute. “We’re essentially making it possible for these cells to ‘talk’ to computers about what’s happening in their surroundings at a new level of detail, scale and efficiency that will enable deeper analysis than what we could do before.”
For conventional labeling methods, researchers can track only a few optical reporter proteins, such as green fluorescent protein, simultaneously because of their overlapping spectral properties. For example, it’s difficult to distinguish between more than three different colors of fluorescent proteins at once. In contrast, NanoporeTERs were designed to carry distinct protein “barcodes” composed of strings of amino acids that, when used in combination, allow at least ten times more multiplexing possibilities.
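To give a rough sense of the multiplexing arithmetic (my illustration, not the paper's: the ~3-color optical limit and the 20-plus-tag library come from the text above, while the combination counts are plain combinatorics on assumed numbers):

```python
from math import comb

optical_channels = 3   # spectrally distinguishable fluorescent colors (from the text)
tags = 20              # distinct NanoporeTER barcodes demonstrated (from the text)

# Used singly, the tags already give 20 channels; allowing a sample to carry
# combinations of tags grows the readout space combinatorially.
single = tags
pairs = comb(tags, 2)  # unordered two-tag combinations

print(single, pairs)   # 20 190
```
Even restricting to pairs of tags, the barcode scheme offers far more than ten times the channels of a three-color optical readout.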
These synthetic proteins are secreted outside of a cell into the surrounding environment, where researchers can collect and analyze them using a commercially available nanopore array. Here, the team used the Oxford Nanopore Technologies MinION device.
The researchers engineered the NanoporeTER proteins with charged “tails” so that they can be pulled into the nanopore sensors by an electric field. Then the team uses machine learning to classify the electrical signals for each NanoporeTER barcode in order to determine each protein’s output levels.
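The pipeline described above — capture current traces at the nanopore, extract features, then classify which barcode produced each trace — might be sketched as follows. This is a minimal illustration with synthetic signals and a nearest-centroid classifier, not the study's actual data format or model; all numbers (current levels, trace lengths) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for nanopore current traces: assume each barcode class
# shifts the mean ionic current by a different amount (hypothetical values).
n_classes, n_per_class, trace_len = 4, 100, 200
traces = np.stack([
    rng.normal(loc=50 + 5 * label, scale=2.0, size=(n_per_class, trace_len))
    for label in range(n_classes)
])  # shape: (n_classes, n_per_class, trace_len)

# Feature extraction: summarize each trace by its mean and std current.
features = np.stack([traces.mean(axis=2), traces.std(axis=2)], axis=2)

# Split: first 75 traces per class for training, last 25 for testing.
train, test = features[:, :75], features[:, 75:]

# Nearest-centroid classifier: each barcode gets a centroid in feature space.
centroids = train.reshape(n_classes, -1, 2).mean(axis=1)

def classify(feat):
    """Assign a trace's features to the closest barcode centroid."""
    return int(np.argmin(np.linalg.norm(centroids - feat, axis=1)))

correct = sum(
    classify(test[label, i]) == label
    for label in range(n_classes) for i in range(25)
)
print(correct / (n_classes * 25))  # well-separated synthetic classes classify cleanly
```
The real system would use richer features of the ionic current signal and a trained model, but the structure — signal in, features out, barcode label predicted — is the same.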
“This is a fundamentally new interface between cells and computers,” said senior author Jeff Nivala, a UW research assistant professor in the Paul G. Allen School of Computer Science & Engineering. “One analogy I like to make is that fluorescent protein reporters are like lighthouses, and NanoporeTERs are like messages in a bottle.
“Lighthouses are really useful for communicating a physical location, as you can literally see where the signal is coming from, but it’s hard to pack more information into that kind of signal. A message in a bottle, on the other hand, can pack a lot of information into a very small vessel, and you can send many of them off to another location to be read. You might lose sight of the precise physical location where the messages were sent, but for many applications that’s not going to be an issue.”
As a proof of concept, the team developed a library of more than 20 distinct NanoporeTERs tags. But the potential is significantly greater, according to co-lead author Karen Zhang, now a doctoral student in the UC Berkeley-UCSF bioengineering graduate program.
“We are currently working to scale up the number of NanoporeTERs to hundreds, thousands, maybe even millions more,” said Zhang, who graduated this year from the UW with bachelor’s degrees in both biochemistry and microbiology. “The more we have, the more things we can track.
“We’re particularly excited about the potential in single-cell proteomics, but this could also be a game-changer in terms of our ability to do multiplexed biosensing to diagnose disease and even target therapeutics to specific areas inside the body. And debugging complicated genetic circuit designs would become a whole lot easier and much less time-consuming if we could measure the performance of all the components in parallel instead of by trial and error.”
These researchers have made novel use of the MinION device before, when they developed a molecular tagging system to replace conventional inventory control methods. That system relied on barcodes comprising synthetic strands of DNA that could be decoded on demand using the portable reader.
This time, the team went a step further.
“This is the first paper to show how a commercial nanopore sensor device can be repurposed for applications other than the DNA and RNA sequencing for which they were originally designed,” said co-author Kathryn Doroschak, a computational biologist at Adaptive Biotechnologies who completed this work as a doctoral student at the Allen School. “This is exciting as a precursor for nanopore technology becoming more accessible and ubiquitous in the future. You can already plug a nanopore device into your cell phone. I could envision someday having a choice of ‘molecular apps’ that will be relatively inexpensive and widely available outside of traditional genomics.”
Additional co-authors of the paper are Aerilynn Nguyen at Northeastern University and Zoheb Siddiqui at Amazon, both former UW undergraduate students; Nicholas Bogard at Patch Biosciences, a former UW postdoctoral research associate; Luis Ceze, an Allen School professor; and Karin Strauss, an Allen School affiliate professor and a senior principal research manager at Microsoft. This research was funded by the National Science Foundation, the National Institutes of Health and a sponsored research agreement from Oxford Nanopore Technologies.
I’ve been wondering when there’d be more ‘public engagement’ discussion in Canada about artificial intelligence, human genome editing, robotics, and other technologies which are rapidly changing status from ‘emerging technologies’ to ‘embedded technologies’.
In October of 2020, Jennifer Doudna and Emmanuelle Charpentier were awarded the Nobel Prize in chemistry for their discovery of an adaptable, easy way to edit genomes, known as CRISPR [clustered regularly interspaced short palindromic repeats], which has transformed the world of genetic engineering.
CRISPR has been used to fight lung cancer and correct the mutation responsible for sickle cell anemia in stem cells. But the technology was also used by a Chinese scientist to secretly and illegally edit the genomes of twin girls — the first-ever heritable mutation of the human germline made with genetic engineering.
“We’ve moved away from an era of science where we understood the risks that came with new technology and where decision stakes were fairly low,” says Dietram Scheufele, a professor of life sciences communication at the University of Wisconsin-Madison.
Today, Scheufele and his colleagues say, we’re in a world where new technologies have very immediate and sometimes unpredictable but significant impacts on society. In a paper published the week of April 26 in the Proceedings of the National Academy of Sciences [PNAS], the researchers argue that such advanced tech, especially CRISPR, demands more robust and thoughtful public engagement if it is to be harnessed to benefit the public without crossing ethical lines.
The authors say that being thoughtful and transparent about public engagement goals and using evidence from social science can help facilitate the difficult conversations society must have about scientific issues like CRISPR and their societal implications. Effective public engagement, in turn, lays the groundwork for public ownership of advances that do arise from CRISPR.
Life sciences communication Professor Dominique Brossard and graduate student Nicole Krause, along with University of Vienna research assistant Isabelle Freiling, co-authored the report with Scheufele. The paper stems from a 2019 National Academy of Sciences colloquium on CRISPR.
Since 2012, when the CRISPR system was first described, scientists have understood both its genetic engineering potential and the need for public engagement to discuss the possible uses of the technology. Many scientists wanted to avoid rehashing the controversies surrounding genetically modified organisms, which have been harshly criticized as unnatural and unnecessary by some activists despite broad scientific support for their use.
Yet, Krause says, some scientists who supported using CRISPR began by errantly repeating the public engagement methods employed for GMOs, which “assumes that people just need more knowledge, more of an ability to understand the science.” Instead, Krause adds: “Solutions focused on tailoring communications to people’s values would make more sense.”
This values-based public engagement strategy is supported by social science research into how people form and change their opinions around new technologies. Some public engagement methods engage value systems, and encourage thoughtful conversation, more than others.
For example, what researchers term “public involvement” and “public collaboration” are methods of two-way communication involving the joint exchange of information and values and the identification and design of science-based decisions that adhere to those values. That contrasts with “public communication,” which focuses only on the dissemination of scientific information.
Scheufele and his colleagues say that such collaborative approaches could help scientists widen the representation of voices in debates around science to groups who are often overlooked, such as people with disabilities or racial minorities.
“As the scientific community, we don’t have a long track record of effective engagement mechanisms with these communities,” says Scheufele. This failure to reach broader groups stems in part from the low participation rates of most science engagement events, which also attract highly selective audiences.
Another challenge is rewarding scientists for public engagement. “There’s very little incentive in academia to do this kind of work,” says Scheufele.
A recent report by Brossard and others found that a majority of land-grant faculty felt that public engagement was very important, but believed it was less important to their colleagues. That divide suggests scientists feel their engagement efforts won’t be rewarded by their peers, says Brossard.
Now, Brossard, Krause, Scheufele and colleagues have a grant from the National Science Foundation to research how to depolarize debates around CRISPR. Previous studies suggest that making people accountable for their positions helps them think more critically about their underlying reasoning. And when social scientists emphasize the complexity inherent in people’s values, it helps people consider controversial issues with more nuance.
But engaging a diverse society with pluralistic value systems in deliberations on the latest technologies will never be easy.
“The policymaking process involves a lot more than just science. Science will inform how we regulate technologies, and so will religious, political, ethical, regulatory and economic considerations,” says Scheufele. “And so the ability to actually do engagement in this much broader setting where we meaningfully contribute and guide the debate with the best available science is a major challenge.”
Gold stars for everyone who recognized the loose paraphrasing of the title of Gabriel García Márquez’s 1985 novel, Love in the Time of Cholera.
I wrote my headline and first paragraph yesterday and found this in my email box this morning, from a March 25, 2020 University of British Columbia news release, which compares times, diseases, and scares of the past with today’s COVID-19 (Perhaps politicians and others could read this piece and stop using the word ‘unprecedented’ when discussing COVID-19?),
How globalization stoked fear of disease during the Romantic era
In the late 18th and early 19th centuries, the word “communication” had several meanings. People used it to talk about both media and the spread of disease, as we do today, but also to describe transport—via carriages, canals and shipping.
Miranda Burgess, an associate professor in UBC’s English department, is working on a book called Romantic Transport that covers these forms of communication in the Romantic era and invites some interesting comparisons to what the world is going through today.
We spoke with her about the project.
What is your book about?
It’s about global infrastructure at the dawn of globalization—in particular the extension of ocean navigation through man-made inland waterways like canals and ship’s canals. These canals of the late 18th and early 19th century were like today’s airline routes, in that they brought together places that were formerly understood as far apart, and shrank time because they made it faster to get from one place to another.
This book is about that history, about the fears that ordinary people felt in response to these modernizations, and about the way early 19th-century poets and novelists expressed and responded to those fears.
What connections did those writers make between transportation and disease?
In the 1810s, they don’t have germ theory yet, so there’s all kinds of speculation about how disease happens. Works of tropical medicine, which is rising as a discipline, liken the human body to the surface of the earth. They talk about nerves as canals that convey information from the surface to the depths, and the idea that somehow disease spreads along those pathways.
When the canals were being built, some writers opposed them on the grounds that they could bring “strangers” through the heart of the city, and that standing water would become a breeding ground for disease. Now we worry about people bringing disease on airplanes. It’s very similar to that.
What was the COVID-19 of that time?
Probably epidemic cholera [emphasis mine], from about the 1820s onward. The Quarterly Review, a journal that novelist Walter Scott was involved in editing, ran long articles that sought to trace the map of cholera along rivers from South Asia, to Southeast Asia, across Europe and finally to Britain. And in the way that its spread is described, many of the same fears that people are evincing now about COVID-19 were visible then, like the fear of clothes. Is it in your clothes? Do we have to burn our clothes? People were concerned.
What other comparisons can be drawn between those times and what is going on now?
Now we worry about the internet and “fake news.” In the 19th century, they worried about what William Wordsworth called “the rapid communication of intelligence,” which was the daily newspaper. Not everybody had access to newspapers, but each newspaper was read by multiple families and newspapers were available in taverns and coffee shops. So if you were male and literate, you had access to a newspaper, and quite a lot of women did, too.
Paper was made out of rags—discarded underwear. Because of the French Revolution and Napoleonic Wars that followed, France blockaded Britain’s coast and there was a desperate shortage of rags to make paper, which had formerly come from Europe. And so Britain started to import rags from the Caribbean that had been worn by enslaved people.
Papers of the time are full of descriptions of the high cost of rags, how they’re getting their rags from prisons, from prisoners’ underwear, and fear about the kinds of sweat and germs that would have been harboured in those rags—and also discussions of scarcity, as people stole and hoarded those rags. It rings very well with what the internet is telling us now about a bunch of things around COVID-19.
Pietsch, who is also curator emeritus of fishes at the Burke Museum of Natural History and Culture, has published over 200 articles and a dozen books on the biology and behavior of marine fishes. He wrote this book with Rachel J. Arnold, a faculty member at Northwest Indian College in Bellingham and its Salish Sea Research Center.
These walking fishes have stepped into the spotlight lately, with interest growing in recent decades. And though these predatory fishes “will almost certainly devour anything else that moves in a home aquarium,” Pietsch writes, “a cadre of frogfish aficionados around the world has grown within the dive community and among aquarists.” In fact, Pietsch said, there are three frogfish public groups on Facebook, with more than 6,000 members.
First, what is a frogfish?
Ted Pietsch: A member of a family of bony fishes, containing 52 species, all of which are highly camouflaged and whose feeding strategy consists of mimicking the immobile, inert, and benign appearance of a sponge or an algae-encrusted rock, while wiggling a highly conspicuous lure to attract prey.
This is a fish that “walks” and “hops” across the sea bottom, and clambers about over rocks and coral like a four-legged terrestrial animal but, at the same time, can jet-propel itself through open water. Some lay their eggs encapsulated in a complex, floating, mucus mass, called an “egg raft,” while some employ elaborate forms of parental care, carrying their eggs around until they hatch.
They are among the most colorful of nature’s productions, existing in nearly every imaginable color and color pattern, with an ability to completely alter their color and pattern in a matter of days or seconds. All these attributes combined make them one of the most intriguing groups of aquatic vertebrates for the aquarist, diver, and underwater photographer as well as the professional zoologist.
I couldn’t resist the ‘frog’ reference, and I’m glad, since this is a good read with a number of fascinating photographs and illustrations.
A March 24, 2020 news item on phys.org features the future of building construction as perceived by synthetic biologists,
Buildings are not unlike a human body. They have bones and skin; they breathe. Electrified, they consume energy, regulate temperature and generate waste. Buildings are organisms—albeit inanimate ones.
But what if buildings—walls, roofs, floors, windows—were actually alive—grown, maintained and healed by living materials? Imagine architects using genetic tools that encode the architecture of a building right into the DNA of organisms, which then grow buildings that self-repair, interact with their inhabitants and adapt to the environment.
A March 23, 2020 essay by Wil Srubar (Professor of Architectural Engineering and Materials Science, University of Colorado Boulder), which originated the news item, provides more insight,
Living architecture is moving from the realm of science fiction into the laboratory as interdisciplinary teams of researchers turn living cells into microscopic factories. At the University of Colorado Boulder, I lead the Living Materials Laboratory. Together with collaborators in biochemistry, microbiology, materials science and structural engineering, we use synthetic biology toolkits to engineer bacteria to create useful minerals and polymers and form them into living building blocks that could, one day, bring buildings to life.
In our most recent work, published in Matter, we used photosynthetic cyanobacteria to help us grow a structural building material – and we kept it alive. Similar to algae, cyanobacteria are green microorganisms found throughout the environment but best known for growing on the walls in your fish tank. Instead of emitting CO2, cyanobacteria use CO2 and sunlight to grow and, in the right conditions, create a biocement, which we used to help us bind sand particles together to make a living brick.
By keeping the cyanobacteria alive, we were able to manufacture building materials exponentially. We took one living brick, split it in half and grew two full bricks from the halves. The two full bricks grew into four, and four grew into eight. Instead of creating one brick at a time, we harnessed the exponential growth of bacteria to grow many bricks at once – demonstrating a brand new method of manufacturing materials.
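The doubling the researchers describe is simple exponential growth. Under the idealized assumption that every brick can be split in half and both halves regrown into full bricks each generation, the brick count is:

```python
def bricks_after(generations: int, start: int = 1) -> int:
    """Bricks after repeatedly splitting each brick in half and regrowing
    both halves into full bricks (idealized doubling per generation)."""
    return start * 2 ** generations

# The 1 -> 2 -> 4 -> 8 progression from the passage:
print([bricks_after(g) for g in range(4)])  # [1, 2, 4, 8]

# Ten generations from a single starting brick:
print(bricks_after(10))  # 1024
```
In practice, growth rates, nutrient limits, and curing time would cap this well short of the idealized curve, but the contrast with one-at-a-time manufacturing is the point.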
Researchers have only scratched the surface of the potential of engineered living materials. Other organisms could impart other living functions to material building blocks. For example, different bacteria could produce materials that heal themselves, sense and respond to external stimuli like pressure and temperature, or even light up. If nature can do it, living materials can be engineered to do it, too.
It also takes less energy to produce living buildings than standard ones. Making and transporting today’s building materials uses a lot of energy and emits a lot of CO2. For example, limestone is burned to make cement for concrete. Metals and sand are mined and melted to make steel and glass. The manufacture, transport and assembly of building materials account for 11% of global CO2 emissions. Cement production alone accounts for 8%. In contrast, some living materials, like our cyanobacteria bricks, could actually sequester CO2.
The field of engineered living materials is in its infancy, and further research and development is needed to bridge the gap between laboratory research and commercial availability. Challenges include cost, testing, certification and scaling up production. Consumer acceptance is another issue. For example, the construction industry has a negative perception of living organisms. Think mold, mildew, spiders, ants and termites. We’re hoping to shift that perception. Researchers working on living materials also need to address concerns about safety and biocontamination.
The [US] National Science Foundation recently named engineered living materials one of the country’s key research priorities. Synthetic biology and engineered living materials will play a critical role in tackling the challenges humans will face in the 2020s and beyond: climate change, disaster resilience, aging and overburdened infrastructure, and space exploration.
If you have time and interest, this is fascinating. Srubar is a little exuberant and, at this point, I welcome it.
With a significant part of the global population forced to work from home, the occurrence of lower back pain may increase. Lithuanian scientists have devised a spinal stabilisation exercise programme for managing lower back pain in people who perform sedentary jobs. After testing the programme with 70 volunteers, the researchers found that the exercises are not only effective in diminishing non-specific lower back pain, but their effect lasts three times longer than that of a usual muscle strengthening exercise programme.
According to the World Health Organisation, lower back pain is among the top 10 diseases and injuries decreasing quality of life across the global population. It is estimated that non-specific low back pain is experienced by 60% to 70% of people in industrialised societies. Moreover, it is the leading cause of activity limitation and work absence throughout much of the world. For example, in the United Kingdom, low back pain causes more than 100 million lost workdays per year; in the United States, an estimated 149 million.
Chronic lower back pain, which starts from long-term irritation or nerve injury, affects the emotions of the afflicted. Anxiety, low mood and even depression, as well as malfunctions of other bodily systems – nausea, tachycardia, elevated arterial blood pressure – are among the conditions that may be caused by lower back pain.
During the coronavirus disease (COVID-19) outbreak, with a significant part of the global population working from home and not always having a properly designed office space, the occurrence of lower back pain may increase.
“Lower back pain is reaching epidemic proportions. Although it is usually clear what is causing the pain and its chronic nature, people tend to ignore these circumstances and are not willing to change their lifestyle. Lower back pain usually goes away by itself; however, the chances of the pain recurring are very high,” says Dr Irina Klizienė, a researcher at Kaunas University of Technology (KTU) Faculty of Social Sciences, Humanities and Arts.
Dr Klizienė, together with colleagues from KTU and the Lithuanian Sports University, has designed a set of stabilisation exercises aimed at strengthening the muscles that support the spine in the lower back, i.e. the lumbar area. The exercise programme is based on Pilates methodology.
According to Dr Klizienė, the stability of the lumbar segments is an essential element of body biomechanics. Previous research shows that, to avoid lower back pain, it is crucial to strengthen the deep muscles that stabilise the lumbar area of the spine. One of these muscles is the multifidus.
“The human central nervous system uses several strategies, such as preparing to hold a posture, making preliminary adjustments to the posture, and correcting postural mistakes, which need to be reinforced by specific stabilising exercises. Our aim was to design a set of exercises for this purpose,” explains Dr Klizienė.
The programme, designed by Dr Klizienė and her colleagues, comprises static and dynamic exercises that train muscle strength and endurance. The static positions are to be held for 6 to 20 seconds; each exercise is to be repeated 8 to 16 times.
The previous set is a little puzzling but perhaps you’ll find these ones below easier to follow,
I think more pictures of intervening moves would have been useful. Now, getting back to the press release,
To check the efficiency of the programme, 70 female volunteers were randomly assigned either to the lumbar stabilisation exercise programme or to a usual muscle strengthening exercise programme. Both groups exercised twice a week for 45 minutes over 20 weeks. During the experiment, ultrasound scanning of the muscles was carried out.
As soon as 4 weeks into the lumbar stabilisation programme, the cross-section area of the multifidus muscle in subjects of the stabilisation group was observed to have increased; after completing the programme, this increase was statistically significant (p < 0.05). This change was not observed in the strengthening group.
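The significance claim reported for the muscle measurements is the kind of result a paired before/after comparison produces. A minimal sketch of that analysis, using entirely hypothetical numbers (the study's actual measurements and statistical method are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical multifidus cross-section areas (cm^2) for 35 subjects,
# measured before and after a stabilisation-style programme; the mean
# gain of 0.8 cm^2 is an assumed value for illustration only.
n = 35
before = rng.normal(loc=8.0, scale=1.0, size=n)
after = before + rng.normal(loc=0.8, scale=0.5, size=n)

# Paired t statistic: mean change divided by its standard error.
diff = after - before
t_stat = diff.mean() / (diff.std(ddof=1) / np.sqrt(n))

# For df = 34, |t| > ~2.03 corresponds to p < 0.05 (two-sided).
print(t_stat > 2.03)  # prints True for this hypothetical effect size
```
A paired design is appropriate here because each subject serves as her own control, which is why even a modest average gain can be statistically significant.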
Moreover, although both sets of exercises were effective in eliminating lower back pain and strengthening the muscles of the lower back area, the effect of the stabilisation exercises lasted three times longer – 12 weeks after the completion of the stabilisation programme versus 4 weeks after the completion of the muscle strengthening programme.
“There are only a handful of studies, which have directly compared the efficiency of stabilisation exercises against other exercises in eliminating lower back pain”, says Dr Klizienė, “however, there are studies proving that after a year, lower back pain returned only to 30% of people who have completed a stabilisation exercise programme, and to 84% of people who haven’t taken these exercises. After three years these proportions are 35% and 75%.”
According to her, research shows that spine stabilisation exercises are more effective than medical intervention or usual physical activity in treating lower back pain and avoiding the recurrence of symptoms in the future.
I have briefly speculated about the importance of touch elsewhere (see my July 19, 2019 posting regarding BlocKit and blockchain; scroll down about 50% of the way) but this upcoming news bit and the one following it put a different spin on the importance of touch.
Robots and prosthetic devices may soon have a sense of touch equivalent to, or better than, the human skin with the Asynchronous Coded Electronic Skin (ACES), an artificial nervous system developed by a team of researchers at the National University of Singapore (NUS).
The new electronic skin system achieved ultra-high responsiveness and robustness to damage, and can be paired with any kind of sensor skin layers to function effectively as an electronic skin.
The innovation, achieved by Assistant Professor Benjamin Tee and his team from the Department of Materials Science and Engineering at the NUS Faculty of Engineering, was first reported in the prestigious scientific journal Science Robotics on 18 July 2019.
Faster than the human sensory nervous system
“Humans use our sense of touch to accomplish almost every daily task, such as picking up a cup of coffee or making a handshake. Without it, we will even lose our sense of balance when walking. Similarly, robots need to have a sense of touch in order to interact better with humans, but robots today still cannot feel objects very well,” explained Asst Prof Tee, who has been working on electronic skin technologies for over a decade in hope of giving robots and prosthetic devices a better sense of touch.
Drawing inspiration from the human sensory nervous system, the NUS team spent a year and a half developing a sensor system that could potentially perform better. While the ACES electronic nervous system detects signals like the human sensory nervous system, it is made up of a network of sensors connected via a single electrical conductor, unlike the nerve bundles in the human skin. It is also unlike existing electronic skins, which have interlinked wiring systems that can make them sensitive to damage and difficult to scale up.
Elaborating on the inspiration, Asst Prof Tee, who also holds appointments in the NUS Department of Electrical and Computer Engineering, NUS Institute for Health Innovation & Technology (iHealthTech), N.1 Institute for Health and the Hybrid Integrated Flexible Electronic Systems (HiFES) programme, said, “The human sensory nervous system is extremely efficient, and it works all the time to the extent that we often take it for granted. It is also very robust to damage. Our sense of touch, for example, does not get affected when we suffer a cut. If we can mimic how our biological system works and make it even better, we can bring about tremendous advancements in the field of robotics where electronic skins are predominantly applied.”
ACES can detect touches more than 1,000 times faster than the human sensory nervous system. For example, it is capable of differentiating physical contacts between different sensors in less than 60 nanoseconds – the fastest ever achieved for an electronic skin technology – even with large numbers of sensors. ACES-enabled skin can also accurately identify the shape, texture and hardness of objects within 10 milliseconds, ten times faster than the blinking of an eye. This is enabled by the high fidelity and capture speed of the ACES system.
The ACES platform can also be designed to achieve high robustness to physical damage, an important property for electronic skins because they come into frequent physical contact with the environment. Unlike the current system used to interconnect sensors in existing electronic skins, all the sensors in ACES can be connected to a common electrical conductor with each sensor operating independently. This allows ACES-enabled electronic skins to continue functioning as long as there is one connection between the sensor and the conductor, making them less vulnerable to damage.
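To make the shared-conductor idea concrete, here is a minimal sketch of how sensors on one wire might each announce events with a unique pulse signature that a decoder can recognize. The names and signatures are purely illustrative; this is not the NUS team's actual ACES protocol, only the general idea of asynchronous, individually decodable sensors on a common line.

```python
# Illustrative sketch: sensors share one conductor, and each announces a
# touch event by emitting its own unique pulse signature. A decoder
# matches the received pulse train back to the sensor that fired.
# (Hypothetical names and signatures; not the actual ACES encoding.)

SIGNATURES = {
    "sensor_A": (1, 0, 1, 1, 0, 1),
    "sensor_B": (1, 1, 0, 0, 1, 1),
    "sensor_C": (0, 1, 1, 0, 1, 0),
}

def emit(sensor_name):
    """A touched sensor places its pulse signature on the shared line."""
    return SIGNATURES[sensor_name]

def decode(pulse_train):
    """The receiver identifies which sensor fired by signature matching."""
    for name, signature in SIGNATURES.items():
        if pulse_train == signature:
            return name
    return None  # unrecognized pulse train

assert decode(emit("sensor_B")) == "sensor_B"
```

Because each sensor's event is self-identifying, losing one sensor does not disturb the others; the decoder simply stops seeing that signature, which mirrors the damage robustness described above.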
Smart electronic skins for robots and prosthetics
ACES’ simple wiring system and remarkable responsiveness even with increasing numbers of sensors are key characteristics that will facilitate the scale-up of intelligent electronic skins for Artificial Intelligence (AI) applications in robots, prosthetic devices and other human machine interfaces.
“Scalability is a critical consideration as big pieces of high performing electronic skins are required to cover the relatively large surface areas of robots and prosthetic devices,” explained Asst Prof Tee. “ACES can be easily paired with any kind of sensor skin layers, for example, those designed to sense temperatures and humidity, to create high performance ACES-enabled electronic skin with an exceptional sense of touch that can be used for a wide range of purposes,” he added.
For instance, pairing ACES with the transparent, self-healing and water-resistant sensor skin layer also recently developed by Asst Prof Tee’s team, creates an electronic skin that can self-repair, like the human skin. This type of electronic skin can be used to develop more realistic prosthetic limbs that will help disabled individuals restore their sense of touch.
Other potential applications include developing more intelligent robots that can perform disaster recovery tasks or take over mundane operations such as packing of items in warehouses. The NUS team is therefore looking to further apply the ACES platform on advanced robots and prosthetic devices in the next phase of their research.
For those who like videos, the researchers have prepared this,
The West Valley City, Utah, real estate agent [Keven Walgamott] lost his left hand in an electrical accident 17 years ago. Since then, he’s tried out a few different prosthetic limbs, but always found them too clunky and uncomfortable.
Then he decided to work with the University of Utah in 2016 to test out new prosthetic technology that mimics the sensation of human touch, allowing Walgamott to perform delicate tasks with precision — including shaking his wife’s hand.
“I extended my left hand, she came and extended hers, and we were able to feel each other with the left hand for the first time in 13 years, and it was just a marvellous and wonderful experience,” Walgamott told As It Happens guest host Megan Williams.
Walgamott, one of seven participants in the University of Utah study, was able to use an advanced prosthetic hand called the LUKE Arm to pick up an egg without cracking it, pluck a single grape from a bunch, hammer a nail, take a ring on and off his finger, fit a pillowcase over a pillow and more.
While performing the tasks, Walgamott was able to actually feel the items he was holding and correctly gauge the amount of pressure he needed to exert — mimicking a process the human brain does automatically.
“I was able to feel something in each of my fingers,” he said. “What I feel, I guess the easiest way to explain it, is little electrical shocks.”
Those shocks — which he describes as a kind of a tingling sensation — intensify as he tightens his grip.
“Different variations of the intensity of the electricity as I move my fingers around and as I touch things,” he said.
To make that [sense of touch] happen, the researchers implanted electrodes into the nerves on Walgamott’s forearm, allowing his brain to communicate with his prosthetic through a computer outside his body. That means he can move the hand just by thinking about it.
But those signals also work in reverse.
The team attached sensors to the hand of a LUKE Arm. Those sensors detect touch and positioning, and send that information to the electrodes so it can be interpreted by the brain.
For Walgamott, performing a series of menial tasks as a team of scientists recorded his progress was “fun to do.”
“I’d forgotten how well two hands work,” he said. “That was pretty cool.”
But it was also a huge relief from the phantom limb pain he has experienced since the accident, which he describes as a “burning sensation” in the place where his hand used to be.
Keven Walgamott had a good “feeling” about picking up the egg without crushing it.
What seems simple for nearly everyone else can be more of a Herculean task for Walgamott, who lost his left hand and part of his arm in an electrical accident 17 years ago. But he was testing out the prototype of a high-tech prosthetic arm with fingers that not only can move, they can move with his thoughts. And thanks to a biomedical engineering team at the University of Utah, he “felt” the egg well enough so his brain could tell the prosthetic hand not to squeeze too hard.
That’s because the team, led by U biomedical engineering associate professor Gregory Clark, has developed a way for the “LUKE Arm” (so named after the robotic hand that Luke Skywalker got in “The Empire Strikes Back”) to mimic the way a human hand feels objects by sending the appropriate signals to the brain. Their findings were published in a new paper co-authored by U biomedical engineering doctoral student Jacob George, former doctoral student David Kluger, Clark and other colleagues in the latest edition of the journal Science Robotics.
“We changed the way we are sending that information to the brain so that it matches the human body. And by matching the human body, we were able to see improved benefits,” George says. “We’re making more biologically realistic signals.”
That means an amputee wearing the prosthetic arm can sense the touch of something soft or hard, understand better how to pick it up and perform delicate tasks that would otherwise be impossible with a standard prosthetic with metal hooks or claws for hands.
“It almost put me to tears,” Walgamott says about using the LUKE Arm for the first time during clinical tests in 2017. “It was really amazing. I never thought I would be able to feel in that hand again.”
Walgamott, a real estate agent from West Valley City, Utah, and one of seven test subjects at the U, was able to pluck grapes without crushing them, pick up an egg without cracking it and hold his wife’s hand with a sensation in the fingers similar to that of an able-bodied person.
“One of the first things he wanted to do was put on his wedding ring. That’s hard to do with one hand,” says Clark. “It was very moving.”
Those things are accomplished through a complex series of mathematical calculations and modeling.
The LUKE Arm
The LUKE Arm has been in development for some 15 years. The arm itself is made of mostly metal motors and parts with a clear silicon “skin” over the hand. It is powered by an external battery and wired to a computer. It was developed by DEKA Research & Development Corp., a New Hampshire-based company founded by Segway inventor Dean Kamen.
Meanwhile, the U’s team has been developing a system that allows the prosthetic arm to tap into the wearer’s nerves, which are like biological wires that send signals to the arm to move. It does that thanks to an invention by U biomedical engineering Emeritus Distinguished Professor Richard A. Normann called the Utah Slanted Electrode Array. The array is a bundle of 100 microelectrodes and wires that are implanted into the amputee’s nerves in the forearm and connected to a computer outside the body. The array interprets the signals from the still-remaining arm nerves, and the computer translates them to digital signals that tell the arm to move.
But it also works the other way. To perform tasks such as picking up objects requires more than just the brain telling the hand to move. The prosthetic hand must also learn how to “feel” the object in order to know how much pressure to exert because you can’t figure that out just by looking at it.
First, the prosthetic arm has sensors in its hand that send signals to the nerves via the array to mimic the feeling the hand gets upon grabbing something. But equally important is how those signals are sent. It involves understanding how your brain deals with transitions in information when it first touches something. Upon first contact with an object, a burst of impulses runs up the nerves to the brain and then tapers off. Recreating this was a big step.
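The "burst then taper" behaviour can be sketched as a simple rate model: a sudden rise in pressure produces a large transient response that decays toward a lower sustained level. All parameter values below are illustrative assumptions, not the Utah team's actual model (which was fit to recorded primate nerve impulses).

```python
def biomimetic_rate(pressure_samples, burst_gain=5.0, decay=0.5, sustained_gain=1.0):
    """Toy model of biologically realistic touch signalling: a jump in
    pressure produces a large transient impulse rate that decays toward
    a lower sustained rate. Gains and decay are illustrative only."""
    rates = []
    transient = 0.0
    prev = 0.0
    for p in pressure_samples:
        # transient term spikes on increases in pressure, then decays
        transient = transient * decay + burst_gain * max(p - prev, 0.0)
        rates.append(sustained_gain * p + transient)
        prev = p
    return rates

# Constant grip after first contact: a spike at contact, then tapering.
rates = biomimetic_rate([0, 1, 1, 1, 1])
# rates[1] is the contact burst; later samples decay toward the sustained rate
```

The design point is the one Clark makes below: the information carried is not just "how hard" but "how the signal changes over time," and matching that temporal shape is what makes the sensation feel natural.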
“Just providing sensation is a big deal, but the way you send that information is also critically important, and if you make it more biologically realistic, the brain will understand it better and the performance of this sensation will also be better,” says Clark.
To achieve that, Clark’s team used mathematical calculations along with recorded impulses from a primate’s arm to create an approximate model of how humans receive these different signal patterns. That model was then implemented into the LUKE Arm system.
In addition to creating a prototype of the LUKE Arm with a sense of touch, the overall team is already developing a version that is completely portable and does not need to be wired to a computer outside the body. Instead, everything would be connected wirelessly, giving the wearer complete freedom.
Clark says the Utah Slanted Electrode Array is also capable of sending signals to the brain for more than just the sense of touch, such as pain and temperature, though the paper primarily addresses touch. And while their work currently has only involved amputees who lost their extremities below the elbow, where the muscles to move the hand are located, Clark says their research could also be applied to those who lost their arms above the elbow.
Clark hopes that in 2020 or 2021, three test subjects will be able to take the arm home to use, pending federal regulatory approval.
The research involves a number of institutions including the U’s Department of Neurosurgery, Department of Physical Medicine and Rehabilitation and Department of Orthopedics, the University of Chicago’s Department of Organismal Biology and Anatomy, the Cleveland Clinic’s Department of Biomedical Engineering and Utah neurotechnology companies Ripple Neuro LLC and Blackrock Microsystems. The project is funded by the Defense Advanced Research Projects Agency and the National Science Foundation.
“This is an incredible interdisciplinary effort,” says Clark. “We could not have done this without the substantial efforts of everybody on that team.”
A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.
The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.
“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”
Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.
“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”
The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.
Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.
“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”
To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.
“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
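One common accommodation of the kind described here is grab snapping: since people rarely pinch exactly on a virtual object's surface, the system can register a grab on the nearest object within some tolerance. The sketch below is a hypothetical illustration of that idea, not Portal-ble's actual code; the distance tolerance and object representation are assumptions.

```python
import math

def snap_grab(fingertip, objects, tolerance=0.08):
    """Accommodation sketch: register a grab on the nearest virtual
    object whose centre lies within `tolerance` of the fingertip
    position (coordinates in metres; the tolerance is illustrative)."""
    best_id, best_dist = None, tolerance
    for obj_id, centre in objects.items():
        d = math.dist(fingertip, centre)
        if d <= best_dist:
            best_id, best_dist = obj_id, d
    return best_id  # None means no grab registered

objects = {"block1": (0.0, 0.0, 0.0), "block2": (0.3, 0.0, 0.0)}
assert snap_grab((0.05, 0.0, 0.0), objects) == "block1"
assert snap_grab((0.5, 0.5, 0.5), objects) is None
```

This is the virtual-object analogue of Photoshop straightening a wobbly line: the user's imprecise input is quietly corrected toward the nearest plausible intent.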
The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand holding the phone, not in the hand that’s actually grabbing for the virtual object. Even so, Huang said, the vibration feedback helped users to interact more successfully with objects.
In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.
Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an infrared sensor and an external compute stick for extra processing power.
Huang hopes people will download the freely available source code and try it for themselves. “We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”
Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.
This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.
When you think of robotics, you likely think of something rigid, heavy, and built for a specific purpose. New “Robotic Skins” technology developed by Yale researchers flips that notion on its head, allowing users to animate the inanimate and turn everyday objects into robots.
The skins are made from elastic sheets embedded with sensors and actuators developed in the lab of Yale researcher Rebecca Kramer-Bottiglio. Placed on a deformable object — a stuffed animal or a foam tube, for instance — the skins animate these objects from their surfaces. The makeshift robots can perform different tasks depending on the properties of the soft objects and how the skins are applied.
“We can take the skins and wrap them around one object to perform a task — locomotion, for example — and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” she said. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”
Robots are typically built with a single purpose in mind. The robotic skins, however, allow users to create multi-functional robots on the fly. That means they can be used in settings that hadn’t even been considered when they were designed, said Kramer-Bottiglio.
Additionally, using more than one skin at a time allows for more complex movements. For instance, Kramer-Bottiglio said, you can layer the skins to get different types of motion. “Now we can get combined modes of actuation — for example, simultaneous compression and bending.”
To demonstrate the robotic skins in action, the researchers created a handful of prototypes. These include foam cylinders that move like an inchworm, a shirt-like wearable device designed to correct poor posture, and a device with a gripper that can grasp and move objects.
Kramer-Bottiglio said she came up with the idea for the devices a few years ago when NASA [US National Aeronautics and Space Administration] put out a call for soft robotic systems. The technology was designed in partnership with NASA, and its multifunctional and reusable nature would allow astronauts to accomplish an array of tasks with the same reconfigurable material. The same skins used to make a robotic arm out of a piece of foam could be removed and applied to create a soft Mars rover that can roll over rough terrain. With the robotic skins on board, the Yale scientist said, anything from balloons to balls of crumpled paper could potentially be made into a robot with a purpose.
“One of the main things I considered was the importance of multifunctionality, especially for deep space exploration where the environment is unpredictable,” she said. “The question is: How do you prepare for the unknown unknowns?”
For the same line of research, Kramer-Bottiglio was recently awarded a $2 million grant from the National Science Foundation, as part of its Emerging Frontiers in Research and Innovation program.
Next, she said, the lab will work on streamlining the devices and explore the possibility of 3D printing the components.
Just in case the link to the paper becomes obsolete, here’s a citation for the paper,
One of my earliest posts featuring memristors (May 9, 2008) focused on their potential for energy savings but since then most of my postings feature research into their application in the field of neuromorphic (brainlike) computing. (For a description and abbreviated history of the memristor go to this page on my Nanotech Mysteries Wiki.)
A new way of arranging advanced computer components called memristors on a chip could enable them to be used for general computing, which could cut energy consumption by a factor of 100.
This would improve performance in low power environments such as smartphones or make for more efficient supercomputers, says a University of Michigan researcher.
“Historically, the semiconductor industry has improved performance by making devices faster. But although the processors and memories are very fast, they can’t be efficient because they have to wait for data to come in and out,” said Wei Lu, U-M professor of electrical and computer engineering and co-founder of memristor startup Crossbar Inc.
Memristors might be the answer. Named as a portmanteau of memory and resistor, they can be programmed to have different resistance states–meaning they store information as resistance levels. These circuit elements enable memory and processing in the same device, cutting out the data transfer bottleneck experienced by conventional computers in which the memory is separate from the processor.
… unlike ordinary bits, which are 1 or 0, memristors can have resistances that are on a continuum. Some applications, such as computing that mimics the brain (neuromorphic), take advantage of the analog nature of memristors. But for ordinary computing, trying to differentiate among small variations in the current passing through a memristor device is not precise enough for numerical calculations.
Lu and his colleagues got around this problem by digitizing the current outputs—defining current ranges as specific bit values (i.e., 0 or 1). The team was also able to map large mathematical problems into smaller blocks within the array, improving the efficiency and flexibility of the system.
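The digitization step can be illustrated with a simple thresholding scheme: instead of trusting the exact analog current through a cell, the readout maps current ranges onto bit values, with an ambiguous middle band rejected. The threshold values below are made-up numbers for illustration; the paper's actual ranges and circuitry are not reproduced here.

```python
def digitize_current(current_uA, low_max=2.0, high_min=8.0):
    """Sketch of reading a memristor cell digitally: map ranges of the
    measured current (in microamps, illustrative thresholds) to bit
    values, so small analog variations no longer corrupt arithmetic."""
    if current_uA <= low_max:
        return 0          # low-current range reads as bit 0
    if current_uA >= high_min:
        return 1          # high-current range reads as bit 1
    return None           # ambiguous band: re-read or reprogram the cell

assert digitize_current(1.3) == 0
assert digitize_current(9.7) == 1
assert digitize_current(5.0) is None
```

Defining guard bands this way trades some of the memristor's analog range for the numerical reliability that general-purpose computing requires.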
Computers with these new blocks, which the researchers call “memory-processing units,” could be particularly useful for implementing machine learning and artificial intelligence algorithms. They are also well suited to tasks that are based on matrix operations, such as simulations used for weather prediction. The simplest mathematical matrices, akin to tables with rows and columns of numbers, can map directly onto the grid of memristors.
The memristor array situated on a circuit board. Credit: Mohammed Zidan, Nanoelectronics group, University of Michigan.
Once the memristors are set to represent the numbers, operations that multiply and sum the rows and columns can be taken care of simultaneously, with a set of voltage pulses along the rows. The current measured at the end of each column contains the answers. A typical processor, in contrast, would have to read the value from each cell of the matrix, perform multiplication, and then sum up each column in series.
“We get the multiplication and addition in one step. It’s taken care of through physical laws. We don’t need to manually multiply and sum in a processor,” Lu said.
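The physics Lu describes can be written out directly: Ohm's law performs each multiplication (the current through a cell is its conductance times the row voltage), and Kirchhoff's current law performs the sum (currents merging at the bottom of a column add). A plain-Python sketch of what the array computes in a single step:

```python
def crossbar_matvec(conductance, row_voltages):
    """Model of a memristor crossbar's one-step matrix-vector multiply:
    Ohm's law multiplies (I = G * V in each cell) and Kirchhoff's
    current law sums (each column current is the dot product of the row
    voltages with that column of conductances)."""
    n_rows = len(conductance)
    n_cols = len(conductance[0])
    return [sum(conductance[i][j] * row_voltages[i] for i in range(n_rows))
            for j in range(n_cols)]

G = [[1.0, 2.0],
     [3.0, 4.0]]   # conductances encode the matrix; columns are outputs
V = [0.5, 1.0]     # voltage pulses applied along the rows
assert crossbar_matvec(G, V) == [3.5, 5.0]
```

Where this loop takes O(rows × columns) sequential multiply-adds, the physical array produces all the column currents simultaneously, which is the source of the energy and speed advantage.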
His team chose to solve partial differential equations as a test for a 32×32 memristor array — which Lu imagines as just one block of a future system. These equations, including those behind weather forecasting, underpin many problems in science and engineering but are very challenging to solve. The difficulty comes from the complicated forms and multiple variables needed to model physical phenomena.
When solving partial differential equations exactly is impossible, solving them approximately can require supercomputers. These problems often involve very large matrices of data, so the memory-processor communication bottleneck is neatly solved with a memristor array. The equations Lu’s team used in their demonstration simulated a plasma reactor, such as those used for integrated circuit fabrication.
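Why a PDE maps onto matrix hardware: discretizing the equation on a grid turns it into repeated multiply-and-sum sweeps over neighbouring grid points — exactly the operation a crossbar accelerates. As a minimal stand-in for the plasma-reactor simulation (which is far more complex), here is a 1-D Poisson problem, -u'' = f with u = 0 at both ends, solved by plain Jacobi iteration; the grid size and iteration count are illustrative.

```python
def jacobi_poisson_1d(f, h, iterations=2000):
    """Sketch of how a PDE becomes matrix arithmetic: finite differences
    turn -u'' = f (Dirichlet u=0 at both ends) into repeated
    multiply-and-sum updates over grid neighbours — the operation a
    memristor array performs in one physical step per sweep."""
    n = len(f)
    u = [0.0] * n
    for _ in range(iterations):
        new_u = [0.0] * n
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0       # boundary u = 0
            right = u[i + 1] if i < n - 1 else 0.0  # boundary u = 0
            new_u[i] = 0.5 * (left + right + h * h * f[i])
        u = new_u
    return u

# Constant source f = 1 on (0, 1): the exact solution is u = x(1-x)/2,
# so the midpoint value approaches 0.125.
u = jacobi_poisson_1d([1.0] * 9, h=0.1)
```

Each Jacobi sweep is a sparse matrix-vector product, so on a crossbar the inner loop collapses into applying voltage pulses and reading column currents; for the very large grids real simulations need, that removes the memory-processor traffic that dominates conventional solvers.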
This work is described in a study, “A general memristor-based partial differential equation solver,” published in the journal Nature Electronics.
It was supported by the Defense Advanced Research Projects Agency (DARPA) (grant no. HR0011-17-2-0018) and by the National Science Foundation (NSF) (grant no. CCF-1617315).
This injectable bandage could be a gamechanger (as they say) if it can be taken beyond the ‘in vitro’ (i.e., petri dish) testing stage. A May 22, 2018 news item on Nanowerk makes the announcement (Note: A link has been removed),
While several products are available to quickly seal surface wounds, rapidly stopping fatal internal bleeding has proven more difficult. Now researchers from the Department of Biomedical Engineering at Texas A&M University are developing an injectable hydrogel bandage that could save lives in emergencies such as penetrating shrapnel wounds on the battlefield (Acta Biomaterialia, “Nanoengineered injectable hydrogels for wound healing application”).
The researchers combined a hydrogel base (a water-swollen polymer) and nanoparticles that interact with the body’s natural blood-clotting mechanism. “The hydrogel expands to rapidly fill puncture wounds and stop blood loss,” explained Akhilesh Gaharwar, Ph.D., assistant professor and senior investigator on the work. “The surface of the nanoparticles attracts blood platelets that become activated and start the natural clotting cascade of the body.”
Enhanced clotting when the nanoparticles were added to the hydrogel was confirmed by standard laboratory blood clotting tests. Clotting time was reduced from eight minutes to six minutes when the hydrogel was introduced into the mixture. When nanoparticles were added, clotting time was significantly reduced, to less than three minutes.
In addition to the rapid clotting mechanism of the hydrogel composite, the engineers took advantage of special properties of the nanoparticle component. They found they could use the electric charge of the nanoparticles to add growth factors that efficiently adhered to the particles. “Stopping fatal bleeding rapidly was the goal of our work,” said Gaharwar. “However, we found that we could attach growth factors to the nanoparticles. This was an added bonus because the growth factors act to begin the body’s natural wound healing process—the next step needed after bleeding has stopped.”
The researchers were able to attach vascular endothelial growth factor (VEGF) to the nanoparticles. They tested the hydrogel/nanoparticle/VEGF combination in a cell culture test that mimics the wound healing process. The test uses a petri dish with a layer of endothelial cells on the surface that create a solid skin-like sheet. The sheet is then scratched down the center creating a rip or hole in the sheet that resembles a wound.
When the hydrogel containing VEGF bound to the nanoparticles was added to the damaged endothelial cell wound, the cells were induced to grow back and fill-in the scratched region—essentially mimicking the healing of a wound.
“Our laboratory experiments have verified the effectiveness of the hydrogel for initiating both blood clotting and wound healing,” said Gaharwar. “We are anxious to begin tests in animals with the hope of testing and eventual use in humans where we believe our formulation has great potential to have a significant impact on saving lives in critical situations.”
The work was funded by grant EB023454 from the National Institute of Biomedical Imaging and Bioengineering (NIBIB), and the National Science Foundation. The results were reported in the February issue of the journal Acta Biomaterialia.
A penetrating injury from shrapnel is a serious battlefield wound that can ultimately lead to death. Given the high mortality rates due to hemorrhaging, there is an unmet need for materials that can be quickly self-administered to prevent fatality from excessive blood loss.
With a gelling agent commonly used in preparing pastries, researchers from the Inspired Nanomaterials and Tissue Engineering Laboratory have successfully fabricated an injectable bandage to stop bleeding and promote wound healing.
In a recent article “Nanoengineered Injectable Hydrogels for Wound Healing Application” published in Acta Biomaterialia, Dr. Akhilesh K. Gaharwar, assistant professor in the Department of Biomedical Engineering at Texas A&M University, uses kappa-carrageenan and nanosilicates to form injectable hydrogels to promote hemostasis (the process to stop bleeding) and facilitate wound healing via a controlled release of therapeutics.
“Injectable hydrogels are promising materials for achieving hemostasis in case of internal injuries and bleeding, as these biomaterials can be introduced into a wound site using minimally invasive approaches,” said Gaharwar. “An ideal injectable bandage should solidify after injection in the wound area and promote a natural clotting cascade. In addition, the injectable bandage should initiate wound healing response after achieving hemostasis.”
The study uses a common thickening agent known as kappa-carrageenan, obtained from seaweed, to design injectable hydrogels. Hydrogels are 3-D, water-swollen polymer networks, similar to Jell-O, that simulate the structure of human tissues.
When kappa-carrageenan is mixed with clay-based nanoparticles, an injectable gel is obtained. The charged characteristics of the clay-based nanoparticles give the hydrogels their hemostatic ability: plasma proteins and platelets adsorb onto the gel surface and trigger a blood clotting cascade.
“Interestingly, we also found that these injectable bandages can show a prolonged release of therapeutics that can be used to heal the wound,” said Giriraj Lokhande, a graduate student in Gaharwar’s lab and first author of the paper. “The negative surface charge of the nanoparticles enables electrostatic interactions with therapeutics, resulting in their slow release.”
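The slow, electrostatically mediated release described above can be pictured with a first-order release model, a common baseline in drug-delivery work: a stronger drug-nanoparticle attraction corresponds to a smaller rate constant and hence slower release. The rate constants below are purely illustrative, not values from the paper:

```python
import math

def fraction_released(t_hours, k_per_hour):
    """First-order release model: fraction released = 1 - exp(-k * t).

    A smaller rate constant k (stronger electrostatic binding to the
    nanoparticles) means slower release. Illustrative values only.
    """
    return 1.0 - math.exp(-k_per_hour * t_hours)

# Burst release (weak binding) vs. slow electrostatic release after 6 hours:
burst = fraction_released(6.0, 0.5)   # roughly 95% released
slow = fraction_released(6.0, 0.05)   # roughly 26% released
```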
Nanoparticles that promote blood clotting and wound healing (red discs), attached to the wound-filling hydrogel component (black), form a nanocomposite hydrogel. The gel is designed to be self-administered to stop bleeding and begin wound healing in emergency situations. Credit: Lokhande et al.
It’s been an interesting week for hydrogels. On May 21, 2018 there was a news item on ScienceDaily about a bioengineered hydrogel which stimulated brain tissue growth after a stroke (mouse model),
In a first-of-its-kind finding, a new stroke-healing gel helped regrow neurons and blood vessels in mice with stroke-damaged brains, UCLA researchers report in the May 21 issue of Nature Materials.
“We tested this in laboratory mice to determine if it would repair the brain in a model of stroke, and lead to recovery,” said Dr. S. Thomas Carmichael, Professor and Chair of neurology at UCLA. “This study indicated that new brain tissue can be regenerated in what was previously just an inactive brain scar after stroke.”
The brain has a limited capacity for recovery after stroke and other diseases. Unlike some other organs in the body, such as the liver or skin, the brain does not regenerate new connections, blood vessels or new tissue structures. Tissue that dies in the brain from stroke is absorbed, leaving a cavity, devoid of blood vessels, neurons or axons, the thin nerve fibers that project from neurons.
After 16 weeks, stroke cavities in mice contained regenerated brain tissue, including new neural networks — a result that had not been seen before. The mice with new neurons showed improved motor behavior, though the exact mechanism wasn’t clear.
While my main interest is the group’s temporary art gallery, I am providing a brief explanatory introduction and a couple of previews for SIGGRAPH 2018.
For anyone unfamiliar with the Special Interest Group on Computer GRAPHics and Interactive Techniques (SIGGRAPH) and its conferences, from the SIGGRAPH Wikipedia entry (Note: Links have been removed),
Some highlights of the conference are its Animation Theater and Electronic Theater presentations, where recently created CG films are played. There is a large exhibition floor, where several hundred companies set up elaborate booths and compete for attention and recruits. Most of the companies are in the engineering, graphics, motion picture, or video game industries. There are also many booths for schools which specialize in computer graphics or interactivity.
Dozens of research papers are presented each year, and SIGGRAPH is widely considered the most prestigious forum for the publication of computer graphics research. The recent paper acceptance rate for SIGGRAPH has been less than 26%. The submitted papers are peer-reviewed in a single-blind process. There has been some criticism about the preference of SIGGRAPH paper reviewers for novel results rather than useful incremental progress. …
This is the third SIGGRAPH Vancouver has hosted; the others were in 2011 and 2014. The theme for the 2018 iteration is ‘Generations’; here’s more about it from an Aug. 2, 2018 article by Terry Flores for Variety,
While its focus is firmly forward thinking, SIGGRAPH 2018, the computer graphics, animation, virtual reality, games, digital art, mixed reality, and emerging technologies conference, is also tipping its hat to the past thanks to its theme this year: Generations. The conference runs Aug. 12-16 in Vancouver, B.C.
“In the literal people sense, pioneers in the computer graphics industry are standing shoulder to shoulder with researchers, practitioners and the future of the industry — young people — mentoring them, dabbling across multiple disciplines to innovate, relate, and grow,” says SIGGRAPH 2018 conference chair Roy C. Anthony, VP of creative development and operations at software and technology firm Ventuz. “This is really what SIGGRAPH has always been about. Generations really seemed like a very appropriate way of looking back and remembering where we all came from and how far we’ve come.”
SIGGRAPH 2018 has a number of treats in store for attendees, including the debut of Disney’s first VR film, the short “Cycles”; production sessions on the making of “Blade Runner 2049,” “Game of Thrones,” “Incredibles 2” and “Avengers: Infinity War”; as well as sneak peeks of Disney’s upcoming “Ralph Breaks the Internet: Wreck-It Ralph 2” and Laika’s “Missing Link.”
That list of ‘treats’ in the last paragraph makes the conference seem more like an iteration of a ‘comic-con’ than a technology conference.
CHICAGO–In the burgeoning world of virtual reality (VR) technology, it remains a challenge to provide users with a realistic perception of infinite space and natural walking capabilities in the virtual environment. A team of computer scientists has introduced a new approach to address this problem by leveraging a natural human phenomenon: eye blinks.
All humans are functionally blind for about 10 percent of the time under normal circumstances due to eye blinks and saccades, the rapid movements of the eye between two points or objects. Eye blinks are a common and natural cause of so-called “change blindness,” the inability of humans to notice changes to visual scenes. Zeroing in on eye blinks and change blindness, the team has devised a novel computational system that effectively redirects the user in the virtual environment during these natural instances, all with undetectable camera movements to deliver orientation redirection.
“Previous RDW [redirected walking] techniques apply rotations continuously while the user is walking. But the amount of unnoticeable rotations is limited,” notes Eike Langbehn, lead author of the research and doctoral candidate at the University of Hamburg. “That’s why an orthogonal approach is needed–we add some additional rotations when the user is not focused on the visuals. When we learned that humans are functionally blind for some time due to blinks, we thought, ‘Why don’t we do the redirection during eye blinks?'”
Human eye blinks occur approximately 10 to 20 times per minute, about every 4 to 19 seconds. Leveraging this window of opportunity–where humans are unable to detect major motion changes while in a virtual environment–the researchers devised an approach to synchronize a computer graphics rendering system with this visual process, and introduce any useful motion changes in virtual scenes to enhance users’ overall VR experience.
The researchers’ experiments revealed that imperceptible camera rotations of 2 to 5 degrees and translations of 4 to 9 cm of the user’s viewpoint are possible during a blink without users even noticing. They tracked test participants’ eye blinks by an eye tracker in a VR head-mounted display. In a confirmatory study, the team validated that participants could not reliably detect in which of two eye blinks their viewpoint was manipulated while walking a VR curved path. The tests relied on unconscious natural eye blinking, but the researchers say redirection during blinking could be carried out consciously. Since users can consciously blink multiple times a day without much effort, eye blinks provide great potential to be used as an intentional trigger in their approach.
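The thresholds reported above suggest a simple control loop: accumulate the redirection the steering algorithm requests, and apply it only while a blink is detected, clamped to the unnoticeable range, with the remainder deferred to a later blink. A minimal sketch (the function name and the flat 2-D position handling are my own, not from the paper):

```python
import math

# Thresholds reported in the study: rotations of 2-5 degrees and
# translations of 4-9 cm went unnoticed during a single blink.
MAX_BLINK_ROTATION_DEG = 5.0
MAX_BLINK_TRANSLATION_M = 0.09

def redirect_during_blink(camera_yaw_deg, camera_pos_m,
                          desired_yaw_offset_deg, desired_offset_m,
                          blink_detected):
    """Apply redirection only while the user's eyes are closed.

    Offsets larger than the perceptual thresholds are clamped, and the
    remainder is left for a later blink. Positions are (x, z) in meters.
    """
    if not blink_detected:
        return (camera_yaw_deg, camera_pos_m,
                desired_yaw_offset_deg, desired_offset_m)

    # Clamp the rotation to the unnoticeable range.
    applied_rot = max(-MAX_BLINK_ROTATION_DEG,
                      min(MAX_BLINK_ROTATION_DEG, desired_yaw_offset_deg))
    # Clamp the translation magnitude likewise.
    dist = math.hypot(*desired_offset_m)
    scale = 1.0 if dist <= MAX_BLINK_TRANSLATION_M else MAX_BLINK_TRANSLATION_M / dist
    applied_off = (desired_offset_m[0] * scale, desired_offset_m[1] * scale)

    new_yaw = camera_yaw_deg + applied_rot
    new_pos = (camera_pos_m[0] + applied_off[0],
               camera_pos_m[1] + applied_off[1])
    remaining_rot = desired_yaw_offset_deg - applied_rot
    remaining_off = (desired_offset_m[0] - applied_off[0],
                     desired_offset_m[1] - applied_off[1])
    return new_yaw, new_pos, remaining_rot, remaining_off
```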
The team will present their work at SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia. The annual conference and exhibition showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
“RDW is a big challenge since current techniques still need too much space to enable unlimited walking in VR,” notes Langbehn. “Our work might contribute to a reduction of space since we found out that unnoticeable rotations of up to five degrees are possible during blinks. This means we can improve the performance of RDW by approximately 50 percent.”
The team’s results could be used in combination with other VR research, such as novel steering algorithms, improved path prediction, and rotations during saccades, to name a few. Down the road, such techniques could some day enable consumer VR users to virtually walk beyond their living room.
Langbehn collaborated on the work with Frank Steinicke of University of Hamburg, Markus Lappe of University of Muenster, Gregory F. Welch of University of Central Florida, and Gerd Bruder, also of University of Central Florida. For the full paper and video, visit the team’s project page.
About ACM, ACM SIGGRAPH, and SIGGRAPH 2018
ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community for members in research, technology, and applications in computer graphics and interactive techniques. SIGGRAPH is the world’s leading annual interdisciplinary educational experience showcasing the latest in computer graphics and interactive techniques. SIGGRAPH 2018, marking the 45th annual conference hosted by ACM SIGGRAPH, will take place from 12-16 August at the Vancouver Convention Centre in Vancouver, B.C.
They have provided an image illustrating what they mean (I don’t find it especially informative),
Caption: The viewing behavior of a virtual reality user, including fixations (in green) and saccades (in red). A blink fully suppresses visual perception. Credit: Eike Langbehn
Walt Disney Animation Studios will debut its first-ever virtual reality short film at SIGGRAPH 2018, and the hope is that viewers will walk away feeling as connected to the characters as they are to the VR technology involved in making the film.
Cycles, an experimental film directed by Jeff Gipson, centers around the true meaning of creating a home and the life it holds inside its walls. The idea for the film is personal, inspired by Gipson’s childhood spending time with his grandparents and creating memories in their home, and later, having to move them to an assisted living residence.
“Every house has a story unique to the people, the characters who live there,” says Gipson. “We wanted to create a story in this single place and be able to have the viewer witness life happening around them. It is an emotionally driven film, expressing the real ups and downs, the happy and sad moments in life.”
For Cycles, Gipson also drew from his past life as an architect, having spent several years designing skate parks, and from his passion for action sports, including freestyle BMX. In Los Angeles, where Gipson lives, it is not unusual to find homes with an empty swimming pool reserved for skating or freestyle biking. Part of the pitch for Cycles came out of Gipson’s experience riding in these empty pools and being curious about the homes attached to them, the families who lived there, and the memories they made.
SIGGRAPH attendees will have the opportunity to experience Cycles at the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; the Vrcade, a space for VR, AR, and MR games or experiences; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
The production team completed Cycles in four months with about 50 collaborators as part of a professional development program at the studio. A key difference in VR filmmaking includes getting creative with how to translate a story to the VR “screen.” Pre-visualizing the narrative, for one, was a challenge. Rather than traditional storyboarding, Gipson and his team instead used a mix of Quill VR painting techniques and motion capture to “storyboard” Cycles, incorporating painters and artists to generate sculptures or 3D models of characters early on and draw scenes for the VR space. The creators also got innovative with the use of light and color saturation in scenes to help guide the user’s eyes during the film.
“What’s cool for VR is that we are really on the edge of trying to figure out what it is and how to tell stories in this new medium,” says Gipson. “In VR, you can look anywhere and really be transported to a different world, experience it from different angles, and see every detail. We want people watching to feel alive and feel emotion, and give them a true cinematic experience.”
This is Gipson’s VR directorial debut. He joined Walt Disney Animation Studios in 2013, serving as a lighting artist on Disney favorites like Frozen, Zootopia, and Moana. Of getting to direct the studio’s first VR short, he says, “VR is an amazing technology and a lot of times the technology is what is really celebrated. We hope more and more people begin to see the emotional weight of VR films, and with Cycles in particular, we hope they will feel the emotions we aimed to convey with our story.”
Apparently this is a still from the ‘short’,
Caption: Disney Animation Studios will present ‘Cycles’ , its first virtual reality (VR) short, at ACM SIGGRAPH 2018. Credit: Disney Animation Studios
Google has unveiled a new virtual reality (VR) immersive experience based on a novel system that captures and renders high-quality, realistic images from the real world using light fields. Created by a team of leading researchers at Google, Welcome to Light Fields is the tech giant’s splash into the nascent arena of light fields VR experiences, an exciting corner of VR video technology gaining traction for its promise to deliver extremely high-quality imagery and experiences in the virtual world.
Google released Welcome to Light Fields earlier this year as a free app on Steam VR for HTC Vive, Oculus Rift, and Windows Mixed Reality headsets. The creators will demonstrate the VR experience at SIGGRAPH 2018, in the Immersive Pavilion, a new space for this year’s conference. The Pavilion is devoted exclusively to virtual, augmented, and mixed reality and will contain: the Vrcade, a space for VR, AR, and MR games or experiences; the VR Theater, a storytelling extravaganza that is part of the Computer Animation Festival; and the well-known Village, for showcasing large-scale projects. SIGGRAPH 2018, held 12-16 August in Vancouver, British Columbia, is an annual gathering that showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.
Destinations in Welcome to Light Fields include NASA’s Space Shuttle Discovery, delivering to viewers an astronaut’s view inside the flight deck, which has never been open to the public; the pristine teak and mahogany interiors of the Gamble House, an architectural treasure in Pasadena, CA; and the glorious St. Stephen’s Church in Granada Hills, CA, home to a stunning wall of more than 14,000 pieces of glimmering stained glass.
“I love that light fields in VR can teleport you to exotic places in the real world, and truly make you believe you are there,” says Ryan Overbeck, software engineer at Google who co-led the project. “To me, this is magic.”
To bring this experience to life, Overbeck worked with a team that included Paul Debevec, senior staff engineer at Google, who managed the project and led the hardware piece with engineers Xueming Yu, Jay Busch, and Graham Fyffe. With Overbeck, Daniel Erickson and Daniel Evangelakos focused on the software end. The researchers designed a comprehensive system for capturing and rendering high-quality, spherical light field still images from footage captured in the real world. They developed two easy-to-use light field camera rigs, based on the GoPro Hero4 action sports camera, that efficiently capture thousands of images on the surface of a sphere. Those images were then passed through a cloud-based light-field-processing pipeline.
Among other things, explains Overbeck, “The processing pipeline uses computer vision to place the images in 3D and generate depth maps, and we use a modified version of our vp9 video codec to compress the light field data down to a manageable size.” To render a light field dataset, he notes, the team used a rendering algorithm that blends between the thousands of light field images in real time.
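The blending step can be pictured as weighting the capture cameras nearest to the desired view direction. The sketch below is a much-simplified stand-in (inverse-angular-distance weights over the k nearest cameras on the capture sphere); Google's actual renderer blends thousands of images per frame with a far more sophisticated reconstruction filter:

```python
import math

def blend_weights(view_dir, camera_dirs, k=4, eps=1e-6):
    """Toy light field view blending.

    view_dir and camera_dirs are unit vectors on the capture sphere.
    Returns a dict mapping camera index -> normalized blend weight for
    the k cameras angularly closest to the desired view direction.
    Hypothetical sketch, not Google's algorithm.
    """
    def angle(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return math.acos(max(-1.0, min(1.0, dot)))

    angles = [(angle(view_dir, c), i) for i, c in enumerate(camera_dirs)]
    nearest = sorted(angles)[:k]
    raw = [(1.0 / (ang + eps), i) for ang, i in nearest]
    total = sum(w for w, _ in raw)
    return {i: w / total for w, i in raw}
```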
The team relied on Google’s talented pool of engineers in computer vision, graphics, video compression, and machine learning to overcome the unique challenges posed in light fields technology. They also collaborated closely with the WebM team (who make the vp9 video codec) to develop the high-quality light field compression format incorporated into their system, and leaned heavily on the expertise of the Jump VR team to help pose the images and generate depth maps. (Jump is Google’s professional VR system for achieving 3D-360 video production at scale.)
Indeed, with Welcome to Light Fields, the Google team is demonstrating the potential and promise of light field VR technology, showcasing its ability to provide a truly immersive experience with an unmatched level of realism. Though light field technology has been researched and explored in computer graphics for more than 30 years, practical systems for delivering high-quality light field experiences have not, until now, been possible.
Part of the team’s motivation behind creating this VR light field experience is to invigorate the nascent field.
“Welcome to Light Fields proves that it is now possible to make a compelling light field VR viewer that runs on consumer-grade hardware, and we hope that this knowledge will encourage others to get involved with building light field technology and media,” says Overbeck. “We understand that in order to eventually make compelling consumer products based on light fields, we need a thriving light field ecosystem. We need open light field codecs, we need artists creating beautiful light field imagery, and we need people using VR in order to engage with light fields.”
I don’t really understand why this image, which looks like something that belongs in advertising material, would be chosen to accompany a news release on a science-based distribution outlet,
Caption: A team of leading researchers at Google will unveil the new immersive virtual reality (VR) experience “Welcome to Lightfields” at ACM SIGGRAPH 2018. Credit: Image courtesy of Google/Overbeck
Advances in computer-generated imagery have brought vivid, realistic animations to life, but the sounds associated with what we see simulated on screen, such as two objects colliding, are often recordings. Now researchers at Stanford University have developed a system that automatically renders accurate sounds for a wide variety of animated phenomena.
“There’s been a Holy Grail in computing of being able to simulate reality for humans. We can animate scenes and render them visually with physics and computer graphics, but, as for sounds, they are usually made up,” said Doug James, professor of computer science at Stanford University. “Currently there exists no way to generate realistic synchronized sounds for complex animated content, such as splashing water or colliding objects, automatically. This fills that void.”
The researchers will present their work on this sound synthesis system as part of ACM SIGGRAPH 2018, the leading conference on computer graphics and interactive techniques. In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating, the researchers said.
“I’ve spent years trying to solve partial differential equations – which govern how sound propagates – by hand,” said Jui-Hsien Wang, a graduate student in James’ lab and in the Institute for Computational and Mathematical Engineering (ICME), and lead author of the paper. “This is actually a place where you don’t just solve the equation but you can actually hear it once you’ve done it. That’s really exciting to me and it’s fun.”
Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves. It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics. So, although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
Most sounds associated with animations rely on pre-recorded clips, which require vast manual effort to synchronize with the action on-screen. These clips are also restricted to noises that exist – they can’t predict anything new. Other systems that produce and predict sounds as accurate as those of James and his team work only in special cases, or assume the geometry doesn’t deform very much. They also require a long pre-computation phase for each separate object.
“Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation,” said Ante Qu, a graduate student in James’ lab and co-author of the paper.
The simulated sound that results from this method is highly detailed. It takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
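One classical way to approximate the “vibrations excite sound waves” step described above is modal synthesis, which models an object's vibration as a sum of exponentially damped sinusoids. The Stanford system goes further and solves the acoustic wave equation directly, but a modal sketch conveys the basic idea (the mode parameters here are illustrative, not from the paper):

```python
import math

def modal_sound(modes, duration_s, sample_rate=44100):
    """Minimal modal synthesis sketch.

    Each mode is a (frequency_hz, damping_per_s, amplitude) triple, and
    the output waveform is the sum of the damped sinusoids. This is the
    classic precomputation-based baseline that full wave-equation
    solvers like the Stanford system improve on.
    """
    n = int(duration_s * sample_rate)
    samples = []
    for t_idx in range(n):
        t = t_idx / sample_rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d, a in modes)
        samples.append(s)
    return samples

# A single 440 Hz mode with light damping, 10 ms of audio:
wave = modal_sound([(440.0, 5.0, 1.0)], 0.01)
```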
In its current form, the group’s process takes a while to create the finished product. But, now that they have proven this technique’s potential, they can focus on performance optimizations, such as implementing their method on parallel GPU hardware, that should make it drastically faster.
And, even in its current state, the results are worth the wait.
“The first water sounds we generated with the system were among the best ones we had simulated – and water is a huge challenge in computer-generated sound,” said James. “We thought we might get a little improvement, but it is dramatically better than previous approaches even right out of the box. It was really striking.”
Although the group’s work has faithfully rendered sounds of various objects spinning, falling and banging into each other, more complex objects and interactions – like the reverberating tones of a Stradivarius violin – remain difficult to model realistically. That, the group said, will have to wait for a future solution.
Timothy Langlois of Adobe Research is a co-author of this paper. This research was funded by the National Science Foundation and Adobe Research. James is also a professor, by courtesy, of music and a member of Stanford Bio-X.
Researchers Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang have created a video featuring highlights of animations with sounds synthesized using the Stanford researchers’ new system.
The researchers have also provided this image,
By computing pressure waves cast off by rapidly moving and vibrating surfaces – such as a cymbal – a new sound synthesis system developed by Stanford researchers can automatically render realistic sound for computer animations. (Image credit: Timothy Langlois, Doug L. James, Ante Qu and Jui-Hsien Wang)
It does seem like we’re synthesizing the world around us, eh?
SIGGRAPH 2018, the world’s leading showcase of digital art created using computer graphics and interactive techniques, will present a special Art Gallery, entitled “Origins,” and historic Art Papers in Vancouver, B.C. The 45th SIGGRAPH conference will take place 12–16 August at the Vancouver Convention Centre. The programs will also honor the generations of creators that have come before through a special, 50th anniversary edition of the Leonardo journal. To register for the conference, visit S2018.SIGGRAPH.ORG.
The SIGGRAPH 2018 ART GALLERY is a curated exhibition, conceived as a dialogical space that enables the viewer to reflect on man’s diverse cultural values and rituals through contemporary creative practices. Building upon an exciting and eclectic selection of creative practices mediated through technologies that represent the sophistication of our times, the SIGGRAPH 2018 Art Gallery will embrace the narratives of the indigenous communities based near Vancouver and throughout Canada as a source of inspiration. The exhibition will feature contemporary media artworks, art pieces by indigenous communities, and other traces of technologically mediated ludic practices.
Andrés Burbano, SIGGRAPH 2018 Art Gallery chair and professor at Universidad de los Andes, said, “The Art Gallery aims to articulate myth and technology, science and art, the deep past and the computational present, and will coalesce around a theme of ‘Origins.’ Media and technological creative expressions will explore principles such as the origins of the cosmos, the origins of life, the origins of human presence, the origins of the occupation of territories in the Americas, and the origins of people living in the vast territories of the Arctic.”
He continued, “The venue [in Vancouver] hopes to rekindle the original spark that ignited the collaborative spirit of the SIGGRAPH community of engineers, scientists, and artists, who came together to create the very first conference in the early 1970s.”
Highlights from the 2018 Art Gallery include:
Transformation Mask (Canada) [Technology Based]
Shawn Hunt, independent; and Microsoft Garage: Andy Klein, Robert Butterworth, Jonathan Cobb, Jeremy Kersey, Stacey Mulcahy, Brendan O’Rourke, Brent Silk, and Julia Taylor-Hell, Microsoft Vancouver
TRANSFORMATION MASK is an interactive installation that features the Microsoft HoloLens. It utilizes electronics and mechanical engineering to express a physical and digital transformation. Participants are immersed in spatial sounds and holographic visuals.
Somnium (U.S.) [Science Based]
Marko Peljhan, Danny Bazo, and Karl Yerkes, University of California, Santa Barbara
Somnium is a cybernetic installation that provides visitors with the ability to sensorily, cognitively, and emotionally contemplate and experience exoplanetary discoveries, their macro and micro dimensions, and the potential for life in our Galaxy. Some might call it a “space telescope.”
Ernest Edmonds Retrospective – Art Systems 1968-2018 (United Kingdom) [History Based]
Ernest Edmonds, De Montfort University
Celebrating one of the pioneers of computer graphics-based art since the early 1970s, this Ernest Edmonds career retrospective will showcase snapshots of Edmonds’ work as it developed over the years. With one piece from each decade, the retrospective will also demonstrate how vital the Leonardo journal has been throughout the 50-year journey.
In addition to the works above, the Art Gallery will feature pieces from notable female artists Ozge Samanci, Ruth West, and Nicole L’Hullier. For more information about the Edmonds retrospective, read THIS POST ON THE ACM SIGGRAPH BLOG.
The SIGGRAPH 2018 ART PAPERS program is designed to feature research from artists, scientists, theorists, technologists, historians, and more in one of four categories: project description, theory/criticism, methods, or history. The chosen work was selected by an international jury of scholars, artists, and immersive technology developers.
To celebrate the 50th anniversary of LEONARDO (MIT Press), and 10 years of its annual SIGGRAPH issue, SIGGRAPH 2018 is pleased to announce a special anniversary edition of the journal, which will feature the 2018 art papers. For 50 years, Leonardo has been the definitive publication for artist-academics. To learn more about the relationship between SIGGRAPH and the journal, listen to THIS EPISODE OF THE SIGGRAPH SPOTLIGHT PODCAST.
“In order to encourage a wider range of topics, we introduced a new submission type, short papers. This enabled us to accept more content than in previous years. Additionally, for the first time, we will introduce sessions that integrate the Art Gallery artist talks with Art Papers talks, promoting richer connections between these two creative communities,” said Angus Forbes, SIGGRAPH 2018 Art Papers chair and professor at University of California, Santa Cruz.
Art Papers highlights include:
Alienating the Familiar with CGI: A Recipe for Making a Full CGI Art House Animated Feature [Long]
Alex Counsell and Paul Charisse, University of Portsmouth
This paper explores the process of making and funding an art house feature film using full CGI in a marketplace where this has never been attempted. It explores cutting-edge technology and production approaches, as well as routes to successful fundraising.
Augmented Fauna and Glass Mutations: A Dialogue Between Material and Technique in Glassblowing and 3D Printing [Long]
Tobias Klein, City University of Hong Kong
The two presented artworks, “Augmented Fauna” and “Glass Mutations,” were created during an artist residency at the PILCHUCK GLASS SCHOOL. They are examples of the qualities and methods established through a synthesis between digital workflows and traditional craft processes and thus formulate the notion of digital craftsmanship.
Inhabitat: An Imaginary Ecosystem in a Children’s Science Museum [Short]
Graham Wakefield, York University, and Haru Hyunkyung Ji, OCAD University
“Inhabitat” is a mixed reality artwork in which participants become part of an imaginary ecology through three simultaneous perspectives of scale and agency; three distinct ways to see with other eyes. This imaginary world was exhibited at a children’s science museum for five months, using an interactive projection-augmented sculpture, a large screen and speaker array, and a virtual reality head-mounted display.
What’s the what?
My father used to say that, and I always assumed it meant: summarize the high points if you need to, and get to the point—fast. In that spirit, I am both fascinated and mildly appalled. The virtual, mixed, and augmented reality technologies, as well as the others being featured at SIGGRAPH 2018, are wondrous in many ways, but it seems we are coming ever closer to a world where we no longer interact with nature or other humans directly. (See my August 10, 2018 posting about the ‘extinction of experience’ for research that encourages more direct interaction with nature.) I realize that SIGGRAPH is intended as a primarily technical experience, but I think a little more content questioning these technologies and their applications (social implications) might be in order. That’s often the artist’s role, but I can’t see anything in the art gallery descriptions that hints at any sort of fundamental critique.