Author Archives: Maryse de la Giroday

AI (Audeo) uses visual cues to play the right music

A February 4, 2021 news item on ScienceDaily highlights research from the University of Washington (state) about artificial intelligence, piano playing, and Audeo,

Anyone who’s been to a concert knows that something magical happens between the performers and their instruments. It transforms music from being just “notes on a page” to a satisfying experience.

A University of Washington team wondered if artificial intelligence could recreate that delight using only visual cues — a silent, top-down video of someone playing the piano. The researchers used machine learning to develop a system, called Audeo, that generates audio from silent piano performances. When the group tested the music Audeo created with music-recognition apps, such as SoundHound, the apps correctly identified the piece Audeo played about 86% of the time. For comparison, these apps identified the piece in the audio tracks from the source videos 93% of the time.

The researchers presented Audeo Dec. 8 [2020] at the NeurIPS 2020 conference.

A February 4, 2021 University of Washington news release (also on EurekAlert), which originated the news item, offers more detail,

“To create music that sounds like it could be played in a musical performance was previously believed to be impossible,” said senior author Eli Shlizerman, an assistant professor in both the applied mathematics and the electrical and computer engineering departments. “An algorithm needs to figure out the cues, or ‘features,’ in the video frames that are related to generating music, and it needs to ‘imagine’ the sound that’s happening in between the video frames. It requires a system that is both precise and imaginative. The fact that we achieved music that sounded pretty good was a surprise.”

Audeo uses a series of steps to decode what’s happening in the video and then translate it into music. First, it has to detect which keys are pressed in each video frame to create a diagram over time. Then it needs to translate that diagram into something that a music synthesizer would actually recognize as a sound a piano would make. This second step cleans up the data and adds in more information, such as how strongly each key is pressed and for how long.
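As a rough illustration of those two steps, here is a minimal Python sketch: step one yields a per-frame “piano roll” of pressed keys, and step two converts that roll into MIDI-like note events (onset and duration) that a synthesizer could render. All function names, the threshold, and the frame rate are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a two-step video-to-music pipeline like Audeo's.
# Names, threshold, and frame rate are illustrative assumptions.

def detect_pressed_keys(frame_probs, threshold=0.5):
    """Step 1: turn per-frame key-press probabilities (88 piano keys)
    into a binary piano-roll row."""
    return [1 if p >= threshold else 0 for p in frame_probs]

def roll_to_events(roll, frame_rate=25.0):
    """Step 2: convert the piano roll into MIDI-like note events with
    onset time and duration, which a synthesizer can render."""
    events = []
    active = {}  # key index -> onset frame
    for t, row in enumerate(roll):
        for key, pressed in enumerate(row):
            if pressed and key not in active:
                active[key] = t            # note starts
            elif not pressed and key in active:
                onset = active.pop(key)    # note ends
                events.append({"key": key,
                               "onset_s": onset / frame_rate,
                               "duration_s": (t - onset) / frame_rate})
    # close any notes still held at the end of the video
    for key, onset in active.items():
        events.append({"key": key,
                       "onset_s": onset / frame_rate,
                       "duration_s": (len(roll) - onset) / frame_rate})
    return events

# toy example: key 60 held for frames 0-2, key 64 pressed in frame 2 only
probs = [[0.9 if k == 60 else 0.0 for k in range(88)] for _ in range(3)]
probs[2][64] = 0.8
roll = [detect_pressed_keys(row) for row in probs]
events = roll_to_events(roll)
```

The second step is where the real system adds the “teacher’s touch” Shlizerman describes — inferring how strongly and how long each key is pressed rather than the simple thresholding shown here.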

“If we attempt to synthesize music from the first step alone, we would find the quality of the music to be unsatisfactory,” Shlizerman said. “The second step is like how a teacher goes over a student composer’s music and helps enhance it.”

The researchers trained and tested the system using YouTube videos of the pianist Paul Barton. The training consisted of about 172,000 video frames of Barton playing music from well-known classical composers, such as Bach and Mozart. Then they tested Audeo with almost 19,000 frames of Barton playing different music from these composers and others, such as Scott Joplin.

Once Audeo has generated a transcript of the music, it’s time to give it to a synthesizer that can translate it into sound. Every synthesizer will make the music sound a little different — this is similar to changing the “instrument” setting on an electric keyboard. For this study, the researchers used two different synthesizers.

“Fluidsynth makes synthesizer piano sounds that we are familiar with. These are somewhat mechanical-sounding but pretty accurate,” Shlizerman said. “We also used PerfNet, a new AI synthesizer that generates richer and more expressive music. But it also generates more noise.”

Audeo was trained and tested only on Paul Barton’s piano videos. Future research is needed to see how well it could transcribe music for any musician or piano, Shlizerman said.

“The goal of this study was to see if artificial intelligence could generate music that was played by a pianist in a video recording — though we were not aiming to replicate Paul Barton because he is such a virtuoso,” Shlizerman said. “We hope that our study enables novel ways to interact with music. For example, one future application is that Audeo can be extended to a virtual piano with a camera recording just a person’s hands. Also, by placing a camera on top of a real piano, Audeo could potentially assist in new ways of teaching students how to play.”

The researchers have created videos featuring the live pianist and the AI pianist, which you will find embedded in the February 4, 2021 University of Washington news release.

Here’s a link to and a citation for the researchers’ paper,

Audeo: Generating music just from a video of pianist movements by Kun Su, Xiulong Liu, and E. Shlizerman. http://faculty.washington.edu/shlizee/audeo/?_ga=2.11972724.1912597934.1613414721-714686724.1612482256 (I had some difficulty creating a link and ended up with this unwieldy open access (?) version.)

The paper also appears in the proceedings for Advances in Neural Information Processing Systems 33 (NeurIPS 2020) Edited by: H. Larochelle and M. Ranzato and R. Hadsell and M.F. Balcan and H. Lin. I had to scroll through many papers and all I found for ‘Audeo’ was an abstract.

Removing vandals’ graffiti from street art with nanotechnology-enabled method and Happy Italian Research in the World Day and more …

Happy Italian Research in the World Day! Each year since 2018 this has been celebrated on the day that Leonardo da Vinci was born over 500 years ago on April 15. It’s also the start of World Creativity and Innovation Week (WCIW), April 15 – 21, 2021 with over 80 countries (Italy, The Gambia, Mauritius, Belarus, Iceland, US, Syria, Vietnam, Indonesia, Denmark, etc.) celebrating. By the way, April 21, 2021 is the United Nations’ World Creativity and Innovation Day. Now, onto some of the latest research, coming from Italy, on art conservation.

There’s graffiti and there’s graffiti, as Michele Baglioni points out in an April 13, 2021 American Chemical Society (ACS) press conference (Rescuing street art from vandals’ graffiti) held during the ACS Spring 2021 Meeting, which ran online April 5-30, 2021.

An April 13, 2021 news item on ScienceDaily announced the research,

From Los Angeles and the Lower East Side of New York City to Paris and Penang, street art by famous and not-so-famous artists adorns highways, roads and alleys. In addition to creating social statements, works of beauty and tourist attractions, street art sometimes attracts vandals who add their unwanted graffiti, which is hard to remove without destroying the underlying painting. Now, researchers report novel, environmentally friendly techniques that quickly and safely remove over-paintings on street art.

A new eco-friendly method can remove the graffiti that this person is about to spray on the street art behind them. Credit: FOTOKITA/Shutterstock.com

An April 13, 2021 ACS news release (also on EurekAlert), which originated the news item, provides details about this latest work and how it fits into the field of art conservation,

“For decades, we have focused on cleaning or restoring classical artworks that used paints designed to last centuries,” says Piero Baglioni, Ph.D., the project’s principal investigator. “In contrast, modern art and street art, as well as the coatings and graffiti applied on top, use materials that were never intended to stand the test of time.”

Research fellow Michele Baglioni, Ph.D., (no relation to Piero Baglioni) and coworkers built on their colleagues’ work and designed a nanostructured fluid, based on nontoxic solvents and surfactants, loaded in highly retentive hydrogels that very slowly release cleaning agents to just the top layer — a few microns in depth. The undesired top layer is removed in seconds to minutes, with no damage or alteration to the original painting.

Street art and overlying graffiti usually contain one or more of three classes of paint binders — acrylic, vinyl or alkyd polymers. Because these paints are similar in composition, removing the top layer frequently damages the underlying layer. Until now, the only way to remove unwanted graffiti was by using chemical cleaners or mechanical action such as scraping or sand blasting. These traditional methods are hard to control and often damage the original art.

“We have to know exactly what is going on at the surface of the paintings if we want to design cleaners,” explains Michele Baglioni, who is at the University of Florence (Italy). “In some respects, the chemistry is simple — we are using known surfactants, solvents and polymers. The challenge is combining them in the right way to get all the properties we need.”

Michele Baglioni and coworkers used Fourier transform infrared spectroscopy to characterize the binders, fillers and pigments in the three classes of paints. After screening for suitable low-toxicity, “green” solvents and biodegradable surfactants, he used small angle X-ray scattering analyses to study the behavior of four alkyl carbonate solvents and a biodegradable nonionic surfactant in water.

The final step was formulating the nanostructured cleaning combination. The system that worked well also included 2-butanol and a readily biodegradable alkyl glycoside hydrotrope as co-solvents/co-surfactants. Hydrotropes are water-soluble, surface-active compounds used at low levels that allow more concentrated formulations of surfactants to be developed. The system was then loaded into highly retentive hydrogels and tested for its ability to remove overpaintings on laboratory mockups using selected paints in all possible combinations.

After dozens of tests, which helped determine how long the gel should be applied and how to remove it without damaging the underlying painting, he tested the gels on a real piece of street art in Florence, successfully removing graffiti without affecting the original work.

“This is the first systematic study on the selective and controlled removal of modern paints from paints with similar chemical composition,” Michele Baglioni says. The hydrogels can also be used for the removal of top coatings on modern art that were originally intended to preserve the paintings but have turned out to be damaging. The hydrogels will become available commercially from CSGI Solutions for Conservation of Cultural Heritage, a company founded by Piero Baglioni and others. CSGI, the Center for Colloid and Surface Science, is a university consortium mainly funded through programs of the European Union.

And, there was this after the end of the news release,

The researchers acknowledge support and funding from the European Union NANORESTART (Nanomaterials for the Restoration of Works of Art) Program [or NanoRestArt] and CSGI.

The NanoRestArt project has been mentioned here a number of times,

The project ended in November 2018 but the NanoRestArt website can still be accessed.

A 3D spider web, a VR (virtual reality) setup, and sonification (music)

Markus Buehler and his musical spider webs are making news again.

Caption: Cross-sectional images (shown in different colors) of a spider web were combined into this 3D image and translated into music. Credit: Isabelle Su and Markus Buehler

The image (so pretty) you see in the above comes from a Markus Buehler presentation made at the American Chemical Society (ACS) Spring 2021 meeting, held online April 5-30, 2021. The image was also shown during a press conference which the ACS has made available for public viewing. More about that later in this posting.

The ACS issued an April 12, 2021 news release (also on EurekAlert), which provides details about Buehler’s latest work on spider webs and music,

Spiders are master builders, expertly weaving strands of silk into intricate 3D webs that serve as the spider’s home and hunting ground. If humans could enter the spider’s world, they could learn about web construction, arachnid behavior and more. Today, scientists report that they have translated the structure of a web into music, which could have applications ranging from better 3D printers to cross-species communication and otherworldly musical compositions.

The researchers will present their results today at the spring meeting of the American Chemical Society (ACS). ACS Spring 2021 is being held online April 5-30 [2021]. Live sessions will be hosted April 5-16, and on-demand and networking content will continue through April 30 [2021]. The meeting features nearly 9,000 presentations on a wide range of science topics.

“The spider lives in an environment of vibrating strings,” says Markus Buehler, Ph.D., the project’s principal investigator, who is presenting the work. “They don’t see very well, so they sense their world through vibrations, which have different frequencies.” Such vibrations occur, for example, when the spider stretches a silk strand during construction, or when the wind or a trapped fly moves the web.

Buehler, who has long been interested in music, wondered if he could extract rhythms and melodies of non-human origin from natural materials, such as spider webs. “Webs could be a new source for musical inspiration that is very different from the usual human experience,” he says. In addition, by experiencing a web through hearing as well as vision, Buehler and colleagues at the Massachusetts Institute of Technology (MIT), together with collaborator Tomás Saraceno at Studio Tomás Saraceno, hoped to gain new insights into the 3D architecture and construction of webs.

With these goals in mind, the researchers scanned a natural spider web with a laser to capture 2D cross-sections and then used computer algorithms to reconstruct the web’s 3D network. The team assigned different frequencies of sound to strands of the web, creating “notes” that they combined in patterns based on the web’s 3D structure to generate melodies. The researchers then created a harp-like instrument and played the spider web music in several live performances around the world.
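To make the frequency-assignment idea concrete, here is a hedged Python sketch that treats each silk strand like an ideal vibrating string — its fundamental pitch falls as its length grows — and maps each frequency to the nearest MIDI note. The reference length and the 440 Hz anchor are illustrative assumptions, not the team’s actual mapping.

```python
# Hypothetical sketch of sonifying a web: pitch from strand length.
# The reference length and 440 Hz (A4) anchor are illustrative assumptions.
import math

def strand_frequency(length, ref_length=1.0, ref_freq=440.0):
    """Fundamental frequency of an ideal string scales as 1/length."""
    return ref_freq * ref_length / length

def web_to_notes(strand_lengths):
    """Assign each strand a frequency and the nearest MIDI note number."""
    notes = []
    for length in strand_lengths:
        f = strand_frequency(length)
        midi = round(69 + 12 * math.log2(f / 440.0))  # 69 = A4
        notes.append((f, midi))
    return notes

# short, medium, and long strands produce high, middle, and low notes
notes = web_to_notes([0.5, 1.0, 2.0])
```

Combining such notes in patterns that follow the web’s 3D structure — which strands connect, and where — is what turns the mapping into melody.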

The team also made a virtual reality setup that allowed people to visually and audibly “enter” the web. “The virtual reality environment is really intriguing because your ears are going to pick up structural features that you might see but not immediately recognize,” Buehler says. “By hearing it and seeing it at the same time, you can really start to understand the environment the spider lives in.”

To gain insights into how spiders build webs, the researchers scanned a web during the construction process, transforming each stage into music with different sounds. “The sounds our harp-like instrument makes change during the process, reflecting the way the spider builds the web,” Buehler says. “So, we can explore the temporal sequence of how the web is being constructed in audible form.” This step-by-step knowledge of how a spider builds a web could help in devising “spider-mimicking” 3D printers that build complex microelectronics. “The spider’s way of ‘printing’ the web is remarkable because no support material is used, as is often needed in current 3D printing methods,” he says.

In other experiments, the researchers explored how the sound of a web changes as it’s exposed to different mechanical forces, such as stretching. “In the virtual reality environment, we can begin to pull the web apart, and when we do that, the tension of the strings and the sound they produce change. At some point, the strands break, and they make a snapping sound,” Buehler says.

The team is also interested in learning how to communicate with spiders in their own language. They recorded web vibrations produced when spiders performed different activities, such as building a web, communicating with other spiders or sending courtship signals. Although the frequencies sounded similar to the human ear, a machine learning algorithm correctly classified the sounds into the different activities. “Now we’re trying to generate synthetic signals to basically speak the language of the spider,” Buehler says. “If we expose them to certain patterns of rhythms or vibrations, can we affect what they do, and can we begin to communicate with them? Those are really exciting ideas.”
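The classification step described above can be sketched in miniature: summarize each vibration recording with simple spectral features and label it with a nearest-centroid rule. The features, labels, and toy signals below are illustrative assumptions; the researchers’ actual model and training data are not described in this release.

```python
# Hypothetical sketch of classifying spider-web vibrations by activity.
# Features, labels, and the toy signals are illustrative assumptions.
import math

def spectral_features(signal, rate=1000):
    """Crude features: mean energy and zero-crossing rate
    (a rough stand-in for dominant frequency)."""
    energy = sum(x * x for x in signal) / len(signal)
    crossings = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0)
    return (energy, crossings * rate / (2 * len(signal)))

def nearest_centroid(features, centroids):
    """Pick the activity whose centroid is closest in feature space."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# toy "recordings": a low-frequency web-building signal vs. a
# higher-frequency courtship signal, one second each at 1 kHz
building = [math.sin(2 * math.pi * 5 * t / 1000) for t in range(1000)]
courtship = [math.sin(2 * math.pi * 50 * t / 1000) for t in range(1000)]
centroids = {"web-building": spectral_features(building),
             "courtship": spectral_features(courtship)}
label = nearest_centroid(spectral_features(courtship), centroids)
```

A real classifier would work from far richer features (and signals that sound nearly identical to the human ear), which is why machine learning was needed in the first place.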

You can go here for the April 12, 2021 ‘Making music from spider webs’ ACS press conference; it runs about 30 mins. and you will hear some ‘spider music’ played.

Getting back to the image and spider webs in general, we are most familiar with orb webs (in the part of Canada where I am from, if nowhere else), which look like spirals and are 2D. There are several other types of webs, some of which are 3D, like tangle webs (also known as cobwebs), funnel webs and more. See this March 18, 2020 article “9 Types of Spider Webs: Identification + Pictures & Spiders” by Zach David on Beyond the Treat for more about spiders and their webs. If you have the time, I recommend reading it.

I’ve been following Buehler’s spider web/music work for close to ten years now; the latest previous posting is an October 23, 2019 posting where you’ll find a link to an application that makes music from proteins (spider webs are made up of proteins; scroll down about 30% of the way; it’s in the 2nd to last line of the quoted text about the embedded video).

Here is a video (2 mins. 17 secs.) of a spider web music performance that Buehler placed on YouTube,

Feb 3, 2021

Markus J. Buehler

Spider’s Canvas/Arachnodrone show excerpt at Palais de Tokyo, Paris, in November 2018. Video by MIT CAST. More videos can be found on www.arachnodrone.com. The performance was commissioned by Studio Tomás Saraceno (STS), in the context of Saraceno’s carte blanche exhibition, ON AIR. Spider’s Canvas/Arachnodrone was performed by Isabelle Su and Ian Hattwick on the spider web instrument, Evan Ziporyn on the EWI (Electronic Wind Instrument), and Christine Southworth on the guitar and EBow (Electronic Bow).

You can find more about the spider web music and Buehler’s collaborators on http://www.arachnodrone.com/,

Spider’s Canvas / Arachnodrone is inspired by the multifaceted work of artist Tomas Saraceno, specifically his work using multiple species of spiders to make sculptural webs. Different species make very different types of webs, ranging not just in size but in design and functionality. Tomas’ own web sculptures are in essence collaborations with the spiders themselves, placing them sequentially over time in the same space, so that the complex, 3-dimensional sculptural web that results is in fact built by several spiders, working together.

Meanwhile, back among the humans at MIT, Isabelle Su, a Course 1 doctoral student in civil engineering, has been focusing on analyzing the structure of single-species spider webs, specifically the ‘tent webs’ of the cyrtophora citricola, a tropical spider of particular interest to her, Tomas, and Professor Markus Buehler. Tomas gave the department a cyrtophora spider, the department gave the spider a space (a small terrarium without glass), and she in turn built a beautiful and complex web. Isabelle then scanned it in 3D and made a virtual model. At the suggestion of Evan Ziporyn and Eran Egozy, she then ported the model into Unity, a VR/game making program, where a ‘player’ can move through it in numerous ways. Evan & Christine Southworth then worked with her on ‘sonifying’ the web and turning it into an interactive virtual instrument, effectively turning the web into a 1700-string resonating instrument, based on the proportional length of each individual piece of silk and their proximity to one another. As we move through the web (currently just with a computer trackpad, but eventually in a VR environment), we create a ‘sonic biome’: complex ‘just intonation’ chords that come in and out of earshot according to which of her strings we are closest to. That part was all done in MAX/MSP, a very flexible high level audio programming environment, which was connected with the virtual environment in Unity. Our new colleague Ian Hattwick joined the team focusing on sound design and spatialization, building an interface that allowed him to sonically ‘sculpt’ the sculpture in real time, changing amplitude, resonance, and other factors.
During this performance at Palais de Tokyo, Isabelle toured the web – that’s what the viewer sees – while Ian adjusted sounds, so in essence they were together “playing the web.” Isabelle provides a space (the virtual web) and a specific location within it (by driving through), which is what the viewer sees, from multiple angles, on the 3 scrims. The location has certain acoustic potentialities, and Ian occupies them sonically, just as a real human performer does in a real acoustic space. A rough analogy might be something like wandering through a gothic cathedral or a resonant cave, using your voice or an instrument at different volumes and on different pitches to find sonorous resonances, echoes, etc. Meanwhile, Evan and Christine are improvising with the web instrument, building on Ian’s sound, with Evan on EWI (Electronic Wind Instrument) and Christine on electric guitar with EBow.

For the visuals, Southworth wanted to create the illusion that the performers were actually inside the web. We built a structure covered in sharkstooth scrim, with 3 projectors projecting in and through from 3 sides. Southworth created images using her photographs of local Lexington, MA spider webs mixed with slides of the scan of the web at MIT, and then mixed those images with the projection of the game, creating an interactive replica of Saraceno’s multi-species webs.

If you listen to the press conference, you will hear Buehler talk about practical applications for this work in materials science.

Online symposium (April 27 – 28, 2021) on Canada’s first federal budget in two years

The Canadian federal budget is due to be announced/revealed on April 19, 2021—the first budget we’ve seen since 2019.

The Canadian Science Policy Centre (CSPC) is hosting an April 27-28, 2021 symposium online and the main focus will be on science and funding. Before moving onto the symposium details, I think a quick refresher is in order.

No oversight, WE Charity scandal

While the Liberal government has done much which is laudable by supporting people and businesses through this worldwide COVID-19 pandemic, there have been at least two notable missteps with regard to fiscal responsibility. This March 24, 2020 article in The Abbotsford News outlines the problem,

Conservative Finance critic Pierre Poilievre says there’s no deal yet between the Liberal government and Opposition over a proposed emergency aid bill to spend billions of dollars to fight the COVID-19 pandemic and cushion some of its damage to the economy.

The opposition parties had said they would back the $82 billion in direct spending and deferred taxes Prime Minister Justin Trudeau promised to put up to prepare the country for mass illness and help Canadians cope with lost jobs and wages.

Yet a draft of the bill circulated Monday suggested it was going to give cabinet, not MPs, extraordinary power over taxes and spending, so ministers could act without Parliament’s approval for months.

The Conservatives will support every one of the aid measures contained in the bill with no debate, Poilievre said. The only issue is whether the government needs to be given never before seen powers to tax and spend. [emphasis mine]

When there’s a minority government like the one Trudeau leads, the chance to bring the government down on a spending bill is what gives the opposition its power.

The government did not receive that approval in Parliament—but they tried. That was in March 2020; a few weeks later, there’s this (from the WE Charity scandal entry on Wikipedia), Note: Links have been removed

On April 5, 2020 amidst the COVID-19 Pandemic, the Prime Minister of Canada, Justin Trudeau, and his then-Finance Minister Bill Morneau, held a telephone conversation discussing measures to financially assist the country’s student population.[14] The Finance Department was tasked with devising a series of measures to address these issues. This would begin a chain of events involving numerous governmental agencies.

Through a no-bid selection process [emphasis mine], WE Charity was chosen to administer the CSSG [Canada Student Service Grant], which would have created grants for students who volunteered during the COVID-19 pandemic.[15][13] The contract agreement was signed with WE Charity Foundation,[16] a corporation affiliated with WE Charity, on June 23, 2020. It was agreed that WE Charity, which had already begun incurring eligible expenses for the project on May 5 at their own risk,[17][18] would be paid $43.53 million[19] to administer the program; $30 million of which was paid to WE Charity Foundation on June 30, 2020.[18] This was later fully refunded.[17] A senior bureaucrat would note that “ESDC thinks that ‘WE’ might be able to be the volunteer matching third party … The mission of WE is congruent with national service and they have a massive following on social media.”[20]

Concurrent to these events, and prior to the announcement of the CSSG on June 25, 2020, WE Charity was simultaneously corresponding with the same government agencies ultimately responsible for choosing the administrator of the program.[8] WE Charity would submit numerous proposals in April, beginning on April 9, 2020, on the topic of youth volunteer award programs.[9] These were able to be reformed into what became the CSSG.[8]

On June 25, 2020 Justin Trudeau announced a series of relief measures for students. Among them was the Canada Student Service Grant program; whereby students would be eligible to receive $1000 for every 100 hours of volunteer activities, up to $5,000.[21]

The structure of the program, and the selection of WE Charity as its administrator, immediately triggered condemnation amongst the Official Opposition,[22] as well as numerous other groups, such as the Public Service Alliance of Canada,[7] Democracy Watch,[23] and Volunteer Canada[24] who argued that WE Charity:

  • Was not the only possible administrator as had been claimed
  • Had been the beneficiary of cronyism
  • Had experienced significant disruption due to the COVID-19 pandemic and required a bailout
  • Had illegally lobbied the government
  • Was unable to operate in French-speaking regions of Canada
  • Was potentially in violation of labour laws
  • Had created hundreds of volunteer positions with WE Charity itself as part of the program, doing work generally conducted by paid employees, representing a conflict of interests. …

In a July 13, 2020 article about the scandal on BBC (British Broadcasting Corporation) online, it’s noted that Trudeau was about to undergo his third ethics inquiry since first becoming Prime Minister in 2015. His first ethics inquiry took place in 2017, the second in 2019, and the third in 2020.

None of this has anything to do with science funding (as far as I know) but it does set the stage for questions about how science funding is determined and who will be getting it. There are already systems in place for science funding through various agencies but the federal budget often sets special priorities such as the 2017 Pan-Canadian Artificial Intelligence Strategy with its attendant $125M. As well, Prime Minister Justin Trudeau likes to use science as a means of enhancing his appeal. See my March 16, 2018 posting for a sample of this; scroll down to the “Sunny ways: a discussion between Justin Trudeau and Bill Nye” subhead.

Federal Budget 2021 Symposium

From the CSPC’s Federal Budget 2021 Symposium event page, Note: Minor changes have been made due to my formatting skills, or lack thereof,

Keynote talk by David Watters entitled: “Canada’s Performance in R&D and Innovation Ecosystem in the Context of Health and Economic Impact of COVID-19 and Investments in the Budget“ [sic]

Tentative Event Schedule

Tuesday April 27
12:00 – 4:30 pm EDT

12:00 – 1:00 Session I: Keynote Address: The Impact of Budget 2021 on the Performance of Canada’s National R&D/Innovation Ecosystem 

David Watters, President & CEO, Global Advantage Consulting

1:15 – 1:45 Session II: Critical Analysis 

Robert Asselin, Senior Vice President, Policy, Business Council of Canada
Irene Sterian, Founder, President & CEO, REMAP (Refined Manufacturing Acceleration Process); Director, Technology & Innovation, Celestica
David Wolfe, Professor of Political Science, UTM [University of Toronto Mississauga], Innovation Policy Lab, Munk School of Global Affairs and Public Policy

2:00 – 3:00 Session III: Superclusters 

Bill Greuel, CEO, Protein Industries Canada
Kendra MacDonald, CEO, Canada’s Ocean Supercluster
Angela Mondou, President & CEO, TECHNATION
Jayson Myers, CEO, Next Generation Manufacturing Canada (NGen)

3:30 – 4:30 Session IV: Business & Industry

Namir Anani, President & CEO, Information and Communications Technology Council [ICTC]
Karl Blackburn, President & CEO, Conseil du patronat du Québec
Tabatha Bull, President & CEO, Canadian Council for Aboriginal Business [CCAB]
Karen Churchill, President & CEO, Ag-West Bio Inc.
Karimah Es Sabar, CEO & Partner of Quark Venture LP; Chair, Health/Biosciences Economic Strategy Table

Wednesday April 28
2:00 – 4:30 pm EDT

2:00 – 3:00 Session V: Universities and Colleges

Steven Liss, Vice-President, Research and Innovation & Professor of Chemistry and Biology, Faculty of Science, Ryerson University
Madison Rilling, Project Manager, Optonique, Québec’s Optics & Photonics Cluster; Youth Council Member, Office of the Chief Science Advisor of Canada

3:30 – 4:30 Session VI: Non-Governmental Organizations 

Genesa M. Greening, President & CEO, BC Women’s Health Foundation
Maya Roy, CEO, YWCA Canada
Gisèle Yasmeen, Executive Director, Food Secure Canada
Jayson Myers, CEO, Next Generation Manufacturing Canada (NGen)

Register Here

Enjoy!

PS: I expect the guests at the Canadian Science Policy Centre’s (CSPC) April 27 – 28, 2021 Federal Budget Symposium to offer at least some commentary that boils down to ‘we love getting more money’ or ‘we’re not getting enough money’ or a bit of both.

I also expect the usual moaning over our failure to support industrial research and/or homegrown companies, e.g., Element AI (a Canadian artificial intelligence company formerly headquartered in Montréal), which was sold to a US company in November 2020 (see the Wikipedia entry). The US company doesn’t seem to have kept any of the employees but it does seem to have acquired the intellectual property.

Supercomputing capability at home with Graphical Processing Units (GPUs)

Researchers at the University of Sussex (in the UK) have found a way to make your personal computer as powerful as a supercomputer according to a February 2, 2021 University of Sussex press release (also on EurekAlert),

University of Sussex academics have established a method of turbocharging desktop PCs to give them the same capability as supercomputers worth tens of millions of pounds.

Dr James Knight and Prof Thomas Nowotny from the University of Sussex’s School of Engineering and Informatics used the latest Graphical Processing Units (GPUs) to give a single desktop PC the capacity to simulate brain models of almost unlimited size.

The researchers believe the innovation, detailed in Nature Computational Science, will make it possible for many more researchers around the world to carry out research on large-scale brain simulation, including the investigation of neurological disorders.

Currently, the cost of supercomputers is so prohibitive they are only affordable to very large institutions and government agencies and so are not accessible for large numbers of researchers.

As well as shaving tens of millions of pounds off the cost of a supercomputer, the simulations run on the desktop PC require approximately 10 times less energy, bringing a significant sustainability benefit too.

Dr Knight, Research Fellow in Computer Science at the University of Sussex, said: “I think the main benefit of our research is one of accessibility. Outside of these very large organisations, academics typically have to apply to get even limited time on a supercomputer for a particular scientific purpose. This is quite a high barrier for entry which is potentially holding back a lot of significant research.

“Our hope for our own research now is to apply these techniques to brain-inspired machine learning so that we can help solve problems that biological brains excel at but which are currently beyond simulations.

“As well as the advances we have demonstrated in procedural connectivity in the context of GPU hardware, we also believe that there is potential for developing new types of neuromorphic hardware built from the ground up for procedural connectivity. Key components could be implemented directly in hardware, which could lead to even more significant compute time improvements.”

The research builds on the work of US researcher Eugene Izhikevich, who pioneered a similar method for large-scale brain simulation in 2006.

At the time, computers were too slow for the method to be widely applicable, meaning that simulating large-scale brain models has until now been possible only for the minority of researchers privileged to have access to supercomputer systems.

The researchers applied Izhikevich’s technique to a modern GPU, with approximately 2,000 times the computing power available 15 years ago, to create a cutting-edge model of a macaque’s visual cortex (with 4.13 × 10⁶ neurons and 24.2 × 10⁹ synapses), which previously could only be simulated on a supercomputer.

The researchers’ GPU accelerated spiking neural network simulator uses the large amount of computational power available on a GPU to ‘procedurally’ generate connectivity and synaptic weights ‘on the go’ as spikes are triggered – removing the need to store connectivity data in memory.
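The ‘procedural’ trick described above can be sketched in a few lines of Python (a simplified illustration, not the authors’ actual simulator code; the 100-synapse fan-out and the weight distribution are made-up parameters): instead of storing a connectivity matrix, the simulator re-derives a neuron’s outgoing synapses from a deterministic seed every time that neuron spikes.

```python
import numpy as np

def synapses_for(pre_neuron: int, n_post: int, seed: int = 42):
    """Procedurally regenerate one neuron's outgoing synapses on demand.

    Re-seeding the generator with (global seed, presynaptic index) means
    the same targets and weights are reproduced on every call, so the
    connectivity never has to be stored in memory.
    """
    rng = np.random.default_rng((seed, pre_neuron))
    targets = rng.choice(n_post, size=100, replace=False)  # assumed fan-out
    weights = rng.normal(loc=0.5, scale=0.1, size=100)     # assumed weights
    return targets, weights

# The same presynaptic neuron always yields identical synapses:
t1, w1 = synapses_for(7, n_post=10_000)
t2, w2 = synapses_for(7, n_post=10_000)
assert (t1 == t2).all() and (w1 == w2).all()
```

Because the generator is re-seeded identically on every call, the cost of storing terabytes of connectivity is traded for a small amount of recomputation each time a spike is triggered.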

Initialization of the researchers’ model took six minutes, and simulation of each biological second took 7.7 minutes in the ground state and 8.4 minutes in the resting state: up to 35% less time than a previous supercomputer simulation. In 2018, on one rack of an IBM Blue Gene/Q supercomputer, initialization of the model took around five minutes and simulating one second of biological time took approximately 12 minutes.

Prof Nowotny, Professor of Informatics at the University of Sussex, said: “Large-scale simulations of spiking neural network models are an important tool for improving our understanding of the dynamics and ultimately the function of brains. However, even small mammals such as mice have on the order of 1 × 10¹² synaptic connections, meaning that simulations require several terabytes of data – an unrealistic memory requirement for a single desktop machine.
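The memory arithmetic behind that mouse-scale claim is easy to check (assuming, purely for illustration, 4 bytes of storage per synapse):

```python
synapses = 1e12           # mouse-scale synapse count quoted above
bytes_per_synapse = 4     # assumption: one 32-bit weight or target index each
total_bytes = synapses * bytes_per_synapse
print(total_bytes / 1e12, "TB")  # 4.0 TB -- several terabytes, as stated
```

Even at a very lean 4 bytes per synapse, the connectivity alone exceeds any desktop’s RAM, which is why regenerating it procedurally matters.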

“This research is a game-changer for computational neuroscience and AI researchers who can now simulate brain circuits on their local workstations, but it also allows people outside academia to turn their gaming PC into a supercomputer and run large neural networks.”

Here’s a link to and a citation for the paper,

Larger GPU-accelerated brain simulations with procedural connectivity by James C. Knight & Thomas Nowotny. Nature Computational Science (2021) DOI: https://doi.org/10.1038/s43588-020-00022-7 Published: 01 February 2021

This paper is behind a paywall.

The need for Wi-Fi speed

Yes, it’s a ‘Top Gun’ movie quote (1986) or, more accurately, a paraphrase of Tom Cruise’s line “I feel the need for speed.” I understand there’s a sequel, which is due to arrive in movie theatres or elsewhere sometime this decade.

Where wireless and Wi-Fi are concerned, I think there is a dog/poodle situation. ‘Dog’ is a general description where ‘poodle’ is a specific description. All poodles (specific) are dogs (general) but not all dogs are poodles. So, wireless is a general description and Wi-Fi is a specific type of wireless communication. All Wi-Fi is wireless but not all wireless is Wi-Fi. That said, on to the research.

Given what seems to be an insatiable desire for speed in the wireless world, the quote seems quite à propos in relation to the latest work on quantum tunneling and its impact on Wi-Fi speed from the Moscow Institute of Physics and Technology (from a February 3, 2021 news item on phys.org),

Scientists from MIPT (Moscow Institute of Physics and Technology), Moscow Pedagogical State University and the University of Manchester have created a highly sensitive terahertz detector based on the effect of quantum-mechanical tunneling in graphene. The sensitivity of the device is already superior to commercially available analogs based on semiconductors and superconductors, which opens up prospects for applications of the graphene detector in wireless communications, security systems, radio astronomy, and medical diagnostics. The research results are published in Nature Communications.

A February 3, 2021 MIPT press release (also on EurekAlert), which originated the news item, provides more technical detail about the work and its relation to Wi-Fi,

Information transfer in wireless networks is based on transformation of a high-frequency continuous electromagnetic wave into a discrete sequence of bits. This technique is known as signal modulation. To transfer the bits faster, one has to increase the modulation frequency. However, this requires synchronous increase in carrier frequency. A common FM-radio transmits at frequencies of hundred megahertz, a Wi-Fi receiver uses signals of roughly five gigahertz frequency, while the 5G mobile networks can transmit up to 20 gigahertz signals. This is far from the limit, and further increase in carrier frequency admits a proportional increase in data transfer rates. Unfortunately, picking up signals with hundred gigahertz frequencies and higher is an increasingly challenging problem.
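As a toy illustration of the scaling described above (my own sketch, not from the press release; the ‘ten carrier cycles per bit’ constant is an arbitrary assumption), the achievable bit rate grows in proportion to the carrier frequency:

```python
def rough_bit_rate(carrier_hz: float, cycles_per_symbol: float = 10.0) -> float:
    """Toy upper bound: one bit per `cycles_per_symbol` carrier cycles."""
    return carrier_hz / cycles_per_symbol

# Carrier frequencies quoted in the press release:
for name, f_hz in [("FM radio", 100e6), ("Wi-Fi", 5e9), ("5G", 20e9)]:
    print(f"{name}: roughly {rough_bit_rate(f_hz) / 1e6:.0f} Mbit/s ceiling")
```

The absolute numbers are meaningless; the point is the proportionality, which is why pushing carriers toward hundreds of gigahertz (and the terahertz detectors that can receive them) is attractive.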

A typical receiver used in wireless communications consists of a transistor-based amplifier of weak signals and a demodulator that rectifies the sequence of bits from the modulated signal. This scheme originated in the age of radio and television, and becomes inefficient at frequencies of hundreds of gigahertz desirable for mobile systems. The fact is that most of the existing transistors aren’t fast enough to recharge at such a high frequency.

An evolutionary way to solve this problem is simply to increase the maximum operation frequency of a transistor. Most specialists in the area of nanoelectronics work hard in this direction. A revolutionary way to solve the problem was theoretically proposed in the early 1990s by the physicists Michael Dyakonov and Michael Shur, and realized, among others, by the group of authors in 2018. It implies abandoning active amplification by the transistor, and abandoning a separate demodulator. What’s left in the circuit is a single transistor, but its role is now different. It transforms a modulated signal into a bit sequence or voice signal by itself, due to the non-linear relation between its current and voltage drop.

In the present work, the authors have proved that the detection of a terahertz signal is very efficient in the so-called tunneling field-effect transistor. To understand how it works, one can recall the principle of an electromechanical relay, where the passage of current through control contacts leads to a mechanical connection between two conductors and, hence, to the emergence of current. In a tunneling transistor, applying voltage to the control contact (termed the “gate”) leads to alignment of the energy levels of the source and channel. This also leads to the flow of current. A distinctive feature of a tunneling transistor is its very strong sensitivity to control voltage. Even a small “detuning” of energy levels is enough to interrupt the subtle process of quantum mechanical tunneling. Similarly, a small voltage at the control gate is able to “connect” the levels and initiate the tunneling current.

“The idea of a strong reaction of a tunneling transistor to low voltages has been known for about fifteen years,” says Dr. Dmitry Svintsov, one of the authors of the study and head of the laboratory for optoelectronics of two-dimensional materials at the MIPT center for photonics and 2D materials. “But it’s been known only in the community of low-power electronics. No one realized before us that the same property of a tunneling transistor can be applied in the technology of terahertz detectors. Georgy Alymov (co-author of the study) and I were lucky to work in both areas. We realized then: if the transistor is opened and closed at a low power of the control signal, then it should also be good at picking up weak signals from the ambient surroundings.”

The created device is based on bilayer graphene, a unique material in which the position of energy levels (more strictly, the band structure) can be controlled using an electric voltage. This allowed the authors to switch between classical transport and quantum tunneling transport within a single device, with just a change in the polarities of the voltage at the control contacts. This possibility is of extreme importance for an accurate comparison of the detecting ability of a classical and quantum tunneling transistor.

The experiment showed that the sensitivity of the device in the tunnelling mode is a few orders of magnitude higher than in the classical transport mode. The minimum signal distinguishable by the detector against the noisy background already competes with that of commercially available superconducting and semiconductor bolometers. However, this is not the limit: the sensitivity of the detector can be further increased in “cleaner” devices with a low concentration of residual impurities. The developed detection theory, tested by experiment, shows that the sensitivity of the “optimal” detector can be a hundred times higher.

“The current characteristics give rise to great hopes for the creation of fast and sensitive detectors for wireless communications,” says study author Dr. Denis Bandurin. “And this area is not limited to graphene, nor to tunnel transistors. We expect that, with the same success, a remarkable detector could be created based, for example, on an electrically controlled phase transition. Graphene turned out to be just a good launching pad here, just a door behind which is a whole world of exciting new research.”

The results presented in this paper are an example of a successful collaboration between several research groups. The authors note that it is this format of work that allows them to obtain world-class scientific results. For example, the same team of scientists earlier demonstrated how waves in the electron sea of graphene can contribute to the development of terahertz technology. “In an era of rapidly evolving technology, it is becoming increasingly difficult to achieve competitive results,” comments Dr. Georgy Fedorov, deputy head of the nanocarbon materials laboratory at MIPT. “Only by combining the efforts and expertise of several groups can we successfully realize the most difficult tasks and achieve the most ambitious goals, which we will continue to do.”

Here’s a link to and a citation for the latest paper,

Tunnel field-effect transistors for sensitive terahertz detection by I. Gayduchenko, S. G. Xu, G. Alymov, M. Moskotin, I. Tretyakov, T. Taniguchi, K. Watanabe, G. Goltsman, A. K. Geim, G. Fedorov, D. Svintsov & D. A. Bandurin. Nature Communications volume 12, Article number: 543 (2021) DOI: https://doi.org/10.1038/s41467-020-20721-z Published: 22 January 2021

This paper is open access.

One last comment: I’m assuming, since the University of Manchester is mentioned, that A. K. Geim is Sir Andre K. Geim (you can look him up here if you’re not familiar with his role in the graphene research community).

Sunlight makes transparent wood even lighter and stronger

Researchers at the University of Maryland (US) have found a way to make wood transparent by using sunlight. From a February 2, 2021 news article by Bob Yirka on phys.org (Note: Links have been removed),

A team of researchers at the University of Maryland has found a new way to make wood transparent. In their paper published in the journal Science Advances, the group describes their process and why they believe it is better than the old one.

The conventional method for making wood transparent involves using chemicals to remove the lignin—a process that takes a long time, produces a lot of liquid waste and results in weaker wood. In this new effort, the researchers have found a way to make wood transparent without having to remove the lignin.

The process involved changing the lignin rather than removing it. The researchers removed only the parts of the lignin molecules that are involved in producing wood color. First, they applied hydrogen peroxide to the wood surface and then exposed the treated wood to UV light (or natural sunlight). The wood was then soaked in ethanol to further clean it. Next, they filled in the pores with clear epoxy to make the wood smooth.

Caption: Solar-assisted large-scale fabrication of transparent wood. (A) Schematic showing the potential large-scale fabrication of transparent wood based on the rotary wood cutting method and the solar-assisted chemical brushing process. (B) The outdoor fabrication of lignin-modified wood with a length of 1 m [9 August 2019 (the summer months) at 13:00 (solar noon), the Global Solar UV Index (UVI): 7 to 8]. (C) Digital photo of a piece of large transparent wood (400 mm by 110 mm by 1 mm). (D) The energy consumption, chemical cost, and waste emission for the solar-assisted chemical brushing process and NaClO2 solution–based delignification process. (E) A radar plot showing a comparison of the fabrication process for transparent wood. Photo credit: Qinqin Xia, University of Maryland, College Park. [downloaded from https://advances.sciencemag.org/content/7/5/eabd7342]

Bob McDonald in a February 5, 2021 posting on his Canadian Broadcasting Corporation (CBC) Quirks & Quarks blog provides a more detailed description of the new ‘solar-based transparency process’,

Early attempts to make transparent wood involved removing the lignin, but this involved hazardous chemicals, high temperatures and a lot of time, making the product expensive and somewhat brittle. The new technique is so cheap and easy it could literally be done in a backyard.

Starting with planks of wood a metre long and one millimetre thick, the scientists simply brushed on a solution of hydrogen peroxide using an ordinary paint brush. When left in the sun, or under a UV lamp for an hour or so, the peroxide bleached out the brown chromophores but left the lignin intact, so the wood turned white.

Next, they infused the wood with a tough transparent epoxy designed for marine use, which filled in the spaces and pores in the wood and then hardened. This made the white wood transparent.

As window material, it would be much more resistant to accidental breakage. The clear wood is lighter than glass, with better insulating properties, which is important because windows are a major source of heat loss in buildings. It also might take less energy to manufacture clear wood because there are no high temperatures involved.

Many different types of wood, from balsa to oak, can be made transparent, and it doesn’t matter if it is cut along the grain or against it. If the transparent wood is made a little thicker, it would be strong enough to become part of the structure of a building, so there could be entire transparent wooden walls.

Adele Peters in her February 2, 2021 article for Fast Company describes the work in Maryland and includes some information about other innovative and possibly sustainable uses of wood (Note: Links have been removed),

It’s [transparent wood] just one of a number of ways scientists and engineers are rethinking how we can use this renewable resource in construction. Skyscrapers made entirely out of wood are gaining popularity in cities around the world. And scientists recently discovered a technique to grow wood in a lab, opening up the possibility of using wood without having to chop down a forest.

There were three previous posts here about this work at the University of Maryland,

University of Maryland looks into transparent wood May 11, 2016 posting

Transparent wood more efficient than glass in windows? Sept, 8, 2016 posting

Glass-like wood windows protect against UV rays and insulate heat October 21, 2020 posting

I have this posting, which is also from 2016 but features work in Sweden,

Transparent wood instead of glass for window panes? April 1, 2016 posting

Getting back to the latest work from the University of Maryland, here’s a link to and a citation for the paper,

Solar-assisted fabrication of large-scale, patternable transparent wood by Qinqin Xia, Chaoji Chen, Tian Li, Shuaiming He, Jinlong Gao, Xizheng Wang and Liangbing Hu. Science Advances Vol. 7, no. 5, eabd7342 DOI: 10.1126/sciadv.abd7342 Published: 27 Jan 2021

This paper is open access.

One last item, Liangbing Hu has founded a company InventWood for commercializing the work he and his colleagues have done at the University of Maryland.

“Wolves, Livestock, and the Physical and Social Environments,” an April 14, 2021 event in celebration of Italian Research in the World Day

ARPICO (Society of Italian Researchers & Professionals in Western Canada) is presenting a pre-celebration event to honour Italian Research in the World Day (April 15, 2021). Take special note: the event is being held the day before.

Before launching into the announcement, bravo to the organizers! ARPICO consistently offers the most comprehensive details about their events of any group that contacts me. One more thing: to date, they are the only group that has described which technology they’re using for the webcast and explicitly addressed any concerns about downloading software (you don’t have to) or about personal information. (Check out the Technical Instruction section here.)

Here are the details from ARPICO’s April 4, 2021 announcement (received via email),

We hope everyone is doing well and being safe while we attempt to outlast this pandemic. In the meanwhile, from the comfort of our homes, we hope to be able to continue to share with you informative lectures to entertain and stimulate thought.

It is our pleasure, in collaboration with the Consulate General of Italy in Vancouver, to announce that ARPICO’s next public event will be held on April 14th, 2021 at 7:00 PM, in celebration of Italian Research in the World Day. Italian Research in the World Day was instituted starting in 2018 as part of the Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo. The celebration day was chosen by government decree to be every year on April 15 on the anniversary of the birth of Leonardo da Vinci.

The main objective of the Italian Research Day in the World is to value the quality and competencies of Italian researchers abroad, but also to promote concrete actions and investments to allow Italian researchers to continue pursuing their careers in their homeland. Italy wishes to enable Italian talents to return from abroad as well as to become an attractive environment for foreign researchers.

This year we are pleased to have Professor Marco Musiani, an academic in biological sciences, share with us a lecture titled “Wolves, Livestock, and the Physical and Social Environments.” An abstract and short professional biography are provided below.

We have chosen BlueJeans as the videoconferencing platform, for which you will only require a web browser (Chrome, Firefox, Edge, Safari, Opera are all supported). Full detailed instructions on how the virtual event will unfold are available on the EventBrite listing here in the Technical Instruction section.

If participants wish to donate to ARPICO, this can be done within EventBrite; this would be greatly appreciated in order to help us continue to build upon our scholarship fund, and to defray the cost of the videoconferencing license.

We look forward to seeing everyone there.

The evening agenda is as follows:

  • 6:45PM – BlueJeans Presentation link becomes active and registrants may join.
    • If you experience any technical difficulties please email us at info@arpico.ca and we will attempt to assist you as best we can.
  • 7:00pm – Start of the evening Event with introductions & lecture by Prof. Marco Musiani
  • ~8:00 pm – Q & A Period via BlueJeans Chat Interface

If you have not already done so, please register for the event by visiting the EventBrite link or RSVPing to info@arpico.ca.

Further details are also available at arpico.ca and Eventbrite.

Wolves, Livestock, and the Physical and Social Environments

Due primarily to wolf predation on livestock (depredation), some groups oppose wolf (Canis lupus) conservation, which is an objective for large sectors of the public. Prof. Musiani’s talk will compare wolf depredation of sheep in Southern Europe to wolf depredation of beef cattle in the US and Canada, taking into account the differences in social and economic contexts. It will detail where and when wolf attacks happen, and what environmental factors promote such attacks. Livestock depredation by wolves is a cost of wolf conservation borne by livestock producers, which creates conflict between producers, wolves and organizations involved in wolf conservation and management. Compensation is the main tool used to mitigate the costs of depredation, but this tool may be of limited effectiveness in improving tolerance for wolves. In poorer countries compensation funds might not be available. Other lethal and nonlethal tools used to manage the problem will also be analysed. Wolf depredation may be a small economic cost to the industry as a whole, although it may be a significant cost to affected producers, as these costs are not equitably distributed across the industry. Prof. Musiani maintains that conservation groups should consider the potential consequences of all of these ecological and economic trends. Specifically, declining sheep or cattle prices and the steady increase in land prices might induce conversion of agricultural land to rural-residential developments, which could negatively impact the whole environment via large-scale habitat change and increased human presence.

Marco Musiani is a Professor in the Dept. of Biological Sciences, Faculty of Science, University of Calgary. He also has a Joint Appointment with the Faculty of Veterinary Medicine in Calgary. His lab has a strong focus on landscape ecology, molecular ecology, and wildlife conservation.

Marco is Principal Investigator on projects on caribou, elk, moose, wolves, grizzlies and other wildlife species throughout the Rocky Mountains and Foothills regions of Canada. All such projects are run together with graduate students and have applications towards impact assessment, mainly of human infrastructure.

His focus is on academic matters. However, he also serves as reviewer for research and management projects, and acted as a consultant for the Food and Agriculture Organisation of the United Nations (working on conflicts with wolves).

WHEN (EVENT): Wed, April 14th, 2021 at 7:00PM (BlueJeans link active at 6:45PM)

WHERE: Online using the BlueJeans Conferencing platform.

RSVP: Please register for tickets at EventBrite

Tickets are Needed

Tickets for this event are FREE. Due to limited seating at the venue, we ask that each household register once and watch the presentation together on a single device. You will receive the event videoconferencing invite link via email in your registration confirmation.

FAQs

  • Where can I contact the organizer with any questions? info@arpico.ca
  • Can I update my registration information? Yes. If you have any questions, contact us at info@arpico.ca
  • I am having trouble using EventBrite and cannot reserve my ticket(s). Can someone at ARPICO help me with my ticket reservation? Of course, simply send your ticket request to us at info@arpico.ca so we can help you.

You can find the programme announcement on this ARPICO event page.

eBOSS maps the universe: a Perimeter Institute (PI) webcast on April 7, 2021

This video features information about eBOSS from a number of researchers including Will Percival, the speaker on the April 7, 2021 PI webcast,

From an April 2, 2021 PI notice (received via email),

Mapping the Universe with eBOSS
WEDNESDAY, APRIL 7 [2021] at 7 pm ET

As Douglas Adams correctly wrote in The Hitchhiker’s Guide to the Galaxy, “Space is big. You just won’t believe how vastly, hugely, mind-bogglingly big it is. I mean, you may think it’s a long way down the road to the drug store, but that’s just peanuts to space.”

Few people understand the vastness of space as well as Will Percival. Percival is a cosmologist working primarily on galaxy surveys, using the positions of galaxies to measure the cosmological expansion rate and growth of cosmological structure. He is the Survey Scientist for the extended Baryon Oscillation Spectroscopic Survey (eBOSS), which created the largest three-dimensional map of the universe ever made using the positions of millions of galaxies and quasars dating back roughly 11 billion years.

In his April 7 [2021] Perimeter Public Lecture webcast, Percival will aim to help the audience grasp the enormity of space using the latest results from eBOSS, exploring the profound insights they provide into the physics of our universe.

You can watch the webcast on April 7, 2021 at 4 pm PT (7 pm ET) here on the Mapping the Universe with eBOSS event page.