Monthly Archives: October 2019

Israeli startup (Nanomedic) and a ‘ray’ gun that shoots wound-healing skin

[downloaded from https://uploads.neatorama.com/images/posts/967/107/107967/Spray-on-Nanofiber-Skin-May-Improve-Burn-and-Wound-Care_0-x.jpg?v=10727]

Where I see a ‘ray’ gun, Rina Raphael, author of a July 6, 2019 article for Fast Company, sees a water pistol (Note: Links have been removed),

Imagine if bandaging looked a little more like, well, a water gun?

Israeli startup Nanomedic Technologies Ltd., a subsidiary of medical device company Nicast, has invented a new mechanical contraption to treat burns, wounds, and surgical injuries by mimicking human tissue. Shaped like a children’s toy, the lightweight SpinCare emits a proprietary nanofiber “second skin” that completely covers the area that needs to heal.

All one needs to do is aim, squeeze the two triggers, and fire off an electrospun polymer material that attaches to the skin.

The Nanomedic spray method avoids any need to come into direct contact with the wound. In that sense, it completely sidesteps painful routine bandage dressings. The transient skin then fully develops into a secure physical barrier with tough adherence. Once new skin is regenerated, usually within two to three weeks (depending on the individual’s healing time), the layer naturally peels off.

“You don’t replace it,” explains Nanomedic CEO Dr. Chen Barak. “You put it only once—on the day of application—and it remains there until it feels the new layer of skin healed.”

“It’s the same model as an espresso machine,” says Barak.

The SpinCare holds single-use ampoules containing Nanomedic’s polymer formulation. Once the capsule is firmly in place, one aims the device roughly eight inches from the wound. Pressing the trigger activates the electrospinning process, which sprays a web-like layer of nanofibers directly on the wound.

The solution adjusts to the morphology of the wound, thereby creating a transient skin layer that imitates the structure of human skin tissue. It’s a transparent, protective film that then allows the patient and doctor to monitor progress. Once the wound has healed and developed a new layer of skin, the SpinCare “bandage” falls off on its own.

The product is already being tested in hospitals. In the coming year, following FDA clearance, Nanomedic plans to expand to emergency rooms, ambulances, military use, and disaster relief response like fire truck companies. The global wound healing market is expected to hit $35 billion by 2025, according to a report by Transparency Market Research.

Nanomedic joins other researchers attempting to reimagine the wound healing process. Engineers at the University of Wisconsin-Madison, for example, created a new kind of protective bandage that sends a mild electrical stimulation, thereby “dramatically” reducing the time deep surgical wounds take to heal.

As for the playful (yet functional) design, it resembles other medical tools utilizing the point-and-shoot feature. Researchers at the Technion-Israel Institute of Technology and Boston Children’s Hospital recently revealed a “hot-glue gun” that melds torn human tissues together. The medical glue is meant to replace painful and often scarring stitches and staples.

Down the line, Nanomedic plans to enter the in-home care market, where it believes it can better assist caretakers for treatment of chronic wounds, such as pressure ulcers. The chronic wounds segment is projected to hold the dominant share in the wound healing market due to aging populations.

But a bigger opportunity lies in the multiple uses the SpinCare can ultimately provide. It is, in essence, a platform technology that could benefit multiple categories, not just medical wound care. Currently, the SpinCare’s capsules do not contain any active ingredients.

Nanomedic is already researching how to add different additives, such as antibacterial complements, collagen, silicone, cannabinoids—and, eventually, stem cells and cellular treatments.

Such advancements would propel the device to new markets, like plastic surgery, aesthetics, and dermatology. The latter, for example, spans “burns” caused by deep, cosmetic laser peels.

“Because it is a solution, we can combine additives inside,” explains Katz. “By that, we are transforming the transient skin into a drug delivery system and slow release system.”

Nanomedic is still at the premarket phase, [emphasis mine] having concluded one clinical trial related to the treatment of split graft donor site wounds and currently engaged in two ongoing burn studies. Barak anticipates FDA approval will take between nine and 12 months, during which the company will focus on building manufacturing lines and preparing for a European launch in early 2020.

According to the startup’s estimates, the product’s final price (not yet determined) will be far more affordable than traditional dressings. Nanomedic has raised $7 million in funding to date, including a grant by the EU’s Horizon 2020 SME Instrument program.

Barak believes Nanocare [sic] brings a highly cost-effective alternative to the healthcare system, but more than anything, she’s proud that SpinCare mitigates patient pain and hassle. Some users, the company reports, are able to return to work and physical activity right away.

The Nanomedic website can be found here. The company has also produced a video featuring SpinCare,

There’s a bit more about the technology (I’m especially interested in the electrospinning) on Nanomedic’s Technology webpage,

Electrospinning technology allows the development of a wide range of products and devices, with tailored composition, geometry and morphology.

Almost any natural or synthetic polymer can be electrospun to create a nanofibrous mat. The intrinsic structure of the electrospun products, which mimics the natural extracellular matrix (ECM), encourages quick and efficient tissue integration and minimizes medical complications.

Raphael’s article and the Nanomedic website offer more detail to what you can see in the excerpts provided here. If you have the time, I recommend checking out both.

CRISPR (clustered regularly interspaced short palindromic repeats) has a metaphor issue?

Elinor Hortie at the University of Sydney (Australia) has written a very interesting essay about CRISPR ‘scissors’, a metaphor she finds misleading. From Hortie’s July 4, 2019 essay on The Conversation,

Last week I read an article about CRISPR, the latest tool scientists are using to edit DNA. It was a great piece – well researched, beautifully written, factually accurate. It covered some of the amazing projects scientists are working on using CRISPR, like bringing animals back from extinction and curing diseases. It also gave me the heebies, but not for the reason you might expect.

Take CRISPR. It’s most often described as a pair of molecular scissors that can be used to modify DNA, the blueprint for life. And when we read that, I think most of us start imagining something like a child with her Lego bricks strewn in front of her, instruction booklet in one hand, scissors in the other. One set of pictograms, one model; one gene, one disease; one snip, one cure. We’re there in a blink. CRISPR seems like it can work miracles.

I want to stress that the molecular scissors metaphor is pretty damn accurate as far as it goes. But in focusing on the relatively simple relationship between CRISPR and DNA, we miss the far more complicated relationship between DNA and the rest of the body. This metaphor ignores an entire ecosystem of moving parts that are crucial for understanding the awe-inspiring, absolutely insane thing scientists are trying to do when they attempt gene editing.

Hortie proposes a different metaphor,

In my research I use CRISPR from time to time. To design experiments and interpret results effectively, I need a solid way to conceptualise what it can (and can’t) do. I do not think of CRISPR as molecular scissors.

Instead I imagine a city. The greater metropolis represents the body, the suburbs are organs, the buildings are cells, the people are proteins, and the internet is DNA.

In this metaphor CRISPR is malware. More precisely, CRISPR is malware that can search for any chosen 20-character line of code and corrupt it. This is not a perfect metaphor by any stretch, but it gets me closer to understanding than almost anything else.
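Hortie’s “malware that can search for any chosen 20-character line of code and corrupt it” maps neatly onto a few lines of code. The sketch below is purely illustrative of the metaphor (the function name and sequences are my invention, and no real CRISPR tool works this way in software):

```python
# Toy illustration of the "CRISPR as malware" metaphor: search a long
# string (the "internet"/genome) for an exact 20-character target and
# corrupt every occurrence of it.

def corrupt_target(genome: str, target: str, replacement: str) -> str:
    """Replace every exact occurrence of a 20-character target sequence."""
    if len(target) != 20:
        raise ValueError("target must be exactly 20 characters")
    return genome.replace(target, replacement)

genome = "AAAA" + "GATTACAGATTACAGATTAC" + "TTTT"
edited = corrupt_target(genome, "GATTACAGATTACAGATTAC", "X" * 20)
print(edited)  # AAAAXXXXXXXXXXXXXXXXXXXXTTTT
```

The simplicity of the search-and-replace is exactly Hortie’s point: the hard part is not the edit itself but knowing which 20 characters to target and what else in the system depends on them.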

Hortie offers an example from her own work demonstrating how a CRISPR ‘malware’ metaphor/analogy more accurately represents the experience of using the gene-editing system,

As an example, let’s look at Alzheimer’s, one of the diseases CRISPR is being touted to cure. The headlines are usually some variation of “CRISPR to correct Alzheimer’s gene!”, and the molecular scissors analogy is never far behind.

It seems reasonable to me that someone could read those words and assume that chopping away the disease-gene with the DNA-shears should be relatively simple. When the cure doesn’t appear within five years, I can understand why that same person would come to ask me why Big Pharma is holding out (this has happened to me more than once).

Now let’s see how it looks using the malware metaphor. The consensus is that Alzheimer’s manifests when a specific protein goes rogue, causing damage to cells and thereby stopping things from working properly inside the brain. It might have a genetic cause, but it’s complicated. In our allegorical city, what would that look like?

I think riots would come close. Rampaging humans (proteins) destroying houses and property (cells), thereby seriously derailing the normal functioning of a specific suburb (the brain).

And you want to fix that with malware?

It’s hard to predict the domino effect

Can you imagine for a second trying to stop soccer hooligans smashing things on the streets of Buenos Aires by corrupting roughly three words in the FIFA by-laws with what’s essentially a jazzed-up command-F function?

I’m not saying it’s not possible – it absolutely is.

But think of all the prior knowledge you need, and all the pieces that have to fall in place for that to work. You’d have to know that the riots are caused by football fans. You’d have to understand which rule was bothering them (heaven help you if it’s more than one), and if that rule causes drama at every game. You’d have to find a 20-character phrase that, when corrupted, would change how the rule was read, rather than just making a trivial typo.

You’d have to know that the relevant footballers have access to the updated rule book, and you’d have to know there were no other regulations making your chosen rule redundant. You’d have to know there aren’t any similar 20-character phrases anywhere on the internet that might get corrupted at the same time (like in the rules for presidential succession say, or in the nuclear warhead codes). Even then you’d still be rolling the dice.

Even if you stop the riots successfully, which of us really knows the long-term consequences of changing the World Game forever?

That’s stretching the metaphor as Hortie notes herself later in the essay. And, she’s not the only one concerned about metaphors and CRISPR. There’s a December 8, 2017 article by Rebecca Robbins for STAT news which covers ten analogies/metaphors ranked from worst to best,

… Some of these analogies are better than others. To compile the definitive ranking, I sat down with STAT’s senior science writer Sharon Begley, a wordsmith who has herself compared CRISPR to “1,000 monkeys editing a Word document” and the kind of dog “you can train to retrieve everything from Frisbees to slippers to a cold beer.”

Sharon and I evaluated each of the metaphors we found by considering these three questions: Is it creative? Is it clear? And is it accurate? Below, our rankings of CRISPR analogies, ordered from worst to best:

10. A knockout punch


9. The hand of God


8. A bomb removal squad

It’s a very interesting list with a description of why each does and doesn’t work as an analogy. By the way, ‘scissors’ was not the top analogy. The number one spot went to ‘A Swiss army knife’.

There are many more essays than I would have believed concerning CRISPR and metaphors/analogies. I’m glad to see them as the language we use to describe our work and our world helps us understand it and can constrain us in unexpected ways. Critiques such as Hortie’s and the others can help us to refine the language and to recognize its limitations.

h/t July 4, 2019 news item on phys.org

Preventing corrosion in oil pipelines at the nanoscale

A June 7, 2019 news item on Azonano announces research into the process of oil pipeline corrosion at the nanoscale (Note: A link has been removed),

Steel pipes tend to rust and sooner or later fail. To anticipate disasters, oil companies and others have developed computer models to foretell when replacement is necessary. However, if the models themselves are incorrect, they can be amended only through experience, an expensive problem if detection happens too late.

Now, scientists at Sandia National Laboratories, the Department of Energy’s Center for Integrated Nanotechnologies and the Aramco Research Center in Boston have discovered that a specific form of nanoscale corrosion is responsible for suddenly diminishing the working life of steel pipes, according to a paper recently published in the journal npj Materials Degradation.

A June 6, 2019 Sandia National Laboratories news release (also on EurekAlert), which originated the news item, provides more technical detail,

Using transmission electron microscopes, which shoot electrons through targets to take pictures, the researchers were able to pin the root of the problem on a triple junction formed by a grain of cementite — a compound of carbon and iron — and two grains of ferrite, a type of iron. This junction forms frequently during most methods of fashioning steel pipe.

Iron atoms slip-sliding away

The researchers found that disorder in the atomic structure of those triple junctions made it easier for the corrosive solution to remove iron atoms along that interface.
In the experiment, the corrosive process stopped when the triple junction had been consumed by corrosion, but the crevice left behind allowed the corrosive solution to attack the interior of the steel.

“We thought of a possible solution for forming new pipe, based on changing the microstructure of the steel surface during forging, but it still needs to be tested and have a patent filed if it works,” said Sandia’s principal investigator Katherine Jungjohann, a paper author and lead microscopist. “But now we think we know where the major problem is.”

Aramco senior research scientist Steven Hayden added, “This was the world’s first real-time observation of nanoscale corrosion in a real-world material — carbon steel — which is the most prevalent type of steel used in infrastructure worldwide. Through it, we identified the types of interfaces and mechanisms that play a role in the initiation and progression of localized steel corrosion. The work is already being translated into models used to prevent corrosion-related catastrophes like infrastructure collapse and pipeline breaks.”

To mimic the chemical exposure of pipe in the field, where the expensive, delicate microscopes could not be moved, very thin pipe samples were exposed at Sandia to a variety of chemicals known to pass through oil pipelines.

Sandia researcher and paper author Khalid Hattar put a dry sample in a vacuum and used a transmission electron microscope to create maps of the steel grain types and their orientation, much as a pilot in a plane might use a camera to create area maps of farmland and roads, except that Hattar’s maps had approximately 6 nanometers resolution. (A nanometer is one-billionth of a meter.)

“By comparing these maps before and after the liquid corrosion experiments, a direct identification of the first phase that fell out of the samples could be identified, essentially identifying the weakest link in the internal microstructure,” Hattar said.

Sandia researcher and paper author Paul Kotula said, “The sample we analyzed was considered a low-carbon steel, but it has relatively high-carbon inclusions of cementite which are the sites of localized corrosion attacks.

“Our transmission electron microscopes were a key piece of this work, allowing us to image the sample, observe the corrosion process, and do microanalysis before and after the corrosion occurred to identify the part played by the ferrite and cementite grains and the corrosion product.”

When Hayden first started working in corrosion research, he said, “I was daunted at how complex and poorly understood corrosion is. This is largely because realistic experiments would involve observing complex materials like steel in liquid environments and with nanoscale resolution, and the technology to accomplish such a feat had only recently been developed and had yet to be applied to corrosion. Now we are optimistic that further work at Sandia and the Center for Integrated Nanotechnologies will allow us to rethink manufacturing processes to minimize the expression of the susceptible nanostructures that render the steel vulnerable to accelerated decay mechanisms.”

Invisible path of localized corrosion

Localized corrosion is different from uniform corrosion. The latter occurs in bulk form and is highly predictable. The former is invisible, creating a pathway observable only at its endpoint and increasing bulk corrosion rates by making it easier for corrosion to spread.

“A better understanding of the mechanisms by which corrosion initiates and progresses at these types of interfaces in steel will be key to mitigating corrosion-related losses,” according to the paper.

Here’s a link to and a citation for the paper,

Localized corrosion of low-carbon steel at the nanoscale by Steven C. Hayden, Claire Chisholm, Rachael O. Grudt, Jeffery A. Aguiar, William M. Mook, Paul G. Kotula, Tatiana S. Pilyugina, Daniel C. Bufford, Khalid Hattar, Timothy J. Kucharski, Ihsan M. Taie, Michele L. Ostraat & Katherine L. Jungjohann. npj Materials Degradation volume 3, Article number: 17 (2019) DOI: https://doi.org/10.1038/s41529-019-0078-1 Published 12 April 2019

This paper is open access.

My love is a black, black rose that purifies water

Cockrell School of Engineering, The University of Texas at Austin

The device you see above was apparently inspired by a rose. Personally, I’ll need to take the scientists’ word for that; the image brings to my mind lava lamps, like the one you see below.

A blue lava lamp Credit: Risa1029 – Own work [downloaded from https://en.wikipedia.org/wiki/Lava_lamp#/media/File:Blue_Lava_lamp.JPG]

In any event, the ‘black rose’ collects and purifies water according to a May 29, 2019 University of Texas at Austin news release (also on EurekAlert),

The rose may be one of the most iconic symbols of the fragility of love in popular culture, but now the flower could hold more than just symbolic value. A new device for collecting and purifying water, developed at The University of Texas at Austin, was inspired by a rose and, while more engineered than enchanted, is a dramatic improvement on current methods. Each flower-like structure costs less than 2 cents and can produce more than half a gallon of water per hour per square meter.

A team led by associate professor Donglei (Emma) Fan in the Cockrell School of Engineering’s Walker Department of Mechanical Engineering developed a new approach to solar steaming for water production – a technique that uses energy from sunlight to separate salt and other impurities from water through evaporation.

In a paper published in the most recent issue of the journal Advanced Materials, the authors outline how an origami rose provided the inspiration for developing a new kind of solar-steaming system made from layered, black paper sheets shaped into petals. Attached to a stem-like tube that collects untreated water from any water source, the 3D rose shape makes it easier for the structure to collect and retain more liquid.

Current solar-steaming technologies are usually expensive, bulky and produce limited results. The team’s method uses inexpensive materials that are portable and lightweight. Oh, and it also looks just like a black-petaled rose in a glass jar.

Those in the know would more accurately describe it as a portable low-pressure controlled solar-steaming-collection “unisystem.” But its resemblance to a flower is no coincidence.

“We were searching for more efficient ways to apply the solar-steaming technique for water production by using black filtered paper coated with a special type of polymer, known as polypyrrole,” Fan said.

Polypyrrole is a material known for its photothermal properties, meaning it’s particularly good at converting solar light into thermal heat.

Fan and her team experimented with a number of different ways to shape the paper to see what was best for achieving optimal water retention levels. They began by placing single, round layers of the coated paper flat on the ground under direct sunlight. The single sheets showed promise as water collectors but not in sufficient amounts. After toying with a few other shapes, Fan was inspired by a book she read in high school. Although not about roses per se, “The Black Tulip” by Alexandre Dumas gave her the idea to try using a flower-like shape, and she discovered the rose to be ideal. Its structure allowed more direct sunlight to hit the photothermic material – with more internal reflections – than other floral shapes and also provided enlarged surface area for water vapor to dissipate from the material.

The device collects water through its stem-like tube – feeding it to the flower-shaped structure on top. It can also collect raindrops coming from above. Water finds its way to the petals where the polypyrrole material coating the flower turns the water into steam. Impurities naturally separate from water when condensed in this way.

“We designed the purification-collection unisystem to include a connection point for a low-pressure pump to help condense the water more effectively,” said Weigu Li, a Ph.D. candidate in Fan’s lab and lead author on the paper. “Once it is condensed, the glass jar is designed to be compact, sturdy and secure for storing clean water.”

The device removes any contamination from heavy metals and bacteria, and it removes salt from seawater, producing clean water that meets drinking standard requirements set by the World Health Organization.

“Our rational design and low-cost fabrication of 3D origami photothermal materials represents a first-of-its-kind portable low-pressure solar-steaming-collection system,” Li said. “This could inspire new paradigms of solar-steaming technologies in clean water production for individuals and homes.”
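The headline figure in the release, more than half a gallon of water per hour per square meter, converts to roughly 1.9 litres. A quick arithmetic check (the conversion factor is the standard US gallon; the rate is the one quoted above):

```python
# Convert the quoted production rate (0.5 US gallons per hour per
# square metre) into litres per hour per square metre.

US_GALLON_L = 3.785411784  # litres per US gallon

rate_l_per_h_m2 = 0.5 * US_GALLON_L
print(round(rate_l_per_h_m2, 2), "L per hour per square metre")  # 1.89
```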

Here’s a citation and another link to the paper,

Portable Low‐Pressure Solar Steaming‐Collection Unisystem with Polypyrrole Origamis by Weigu Li, Zheng Li, Karina Bertelsmann, Donglei Emma Fan. Advanced Materials DOI: https://doi.org/10.1002/adma.201900720 First published: 28 May 2019

This paper is behind a paywall.

Turning wasted energy back into electricity

This work comes from the King Abdullah University of Science and Technology (KAUST; Saudi Arabia). From a June 27, 2019 news item on Nanowerk (Note: A link has been removed),

Some of the vast amount of wasted energy that machines and devices emit as heat could be recaptured using an inexpensive nanomaterial developed at KAUST. This thermoelectric nanomaterial could capture the heat lost by devices, ranging from mobile phones to vehicle engines, and turn it directly back into useful electricity (Advanced Energy Materials, “Low-temperature-processed colloidal quantum dots as building blocks for thermoelectrics”).

A June 27, 2019 KAUST press release, which originated the news item, provides more detail,

The nanomaterial is made using a low-temperature solution-based production process, making it suitable for coating on flexible plastics for use almost anywhere.

“Among the many renewable energy sources, waste heat has not been widely considered,” says Mohamad Nugraha, a postdoctoral researcher in Derya Baran’s lab. Waste heat emitted by machines and devices could be recaptured by thermoelectric materials. These substances have a property that means that when one side of the material is hot and the other is cold, an electric charge builds up along the temperature gradient.

Until now, thermoelectric materials have been made using expensive and energy-intensive processes. Baran, Nugraha and their colleagues have developed a new thermoelectric material made by spin coating a liquid solution of nanomaterials called quantum dots.

The team spin coated a thin layer of lead-sulphide quantum dots on a surface and then added a solution of short linker ligands that crosslink the quantum dots together to enhance the material’s electronic properties.

After repeating the spin-coating process layer by layer to form a 200-nanometer-thick film, gentle thermal annealing dried the film and completed fabrication. “Thermoelectric research has focused on materials processed at very high temperatures, above 400 degrees Celsius,” Nugraha says. The quantum-dot-based thermoelectric material is only heated up to 175 degrees Celsius. This lower processing temperature could cut production costs and means that thermoelectric devices could be formed on a broad range of surfaces, including cheap flexible plastics.

The team’s material showed promising thermoelectric properties. One important parameter of a good thermoelectric is the Seebeck coefficient, which corresponds to the voltage generated when a temperature gradient is applied. “We found some key factors leading to the enhanced Seebeck coefficient in our materials,” Nugraha says.

The team was also able to show that an effect called the quantum confinement, which alters a material’s electronic properties when it is shrunk to the nanoscale, was important for enhancing the Seebeck coefficient. The discovery is a step toward practical high-performance, low-temperature, solution-processed thermoelectric generators, Nugraha says.
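Since the Seebeck coefficient is the ratio of generated voltage to applied temperature difference, a back-of-the-envelope estimate is one multiplication. The sketch below illustrates the relationship only; the numbers are placeholders, not values reported for the KAUST quantum-dot films:

```python
# The Seebeck coefficient S relates a temperature difference across a
# thermoelectric material to the open-circuit voltage it generates:
# V = S * delta_T.

def seebeck_voltage(S_uV_per_K: float, delta_T_K: float) -> float:
    """Open-circuit voltage in millivolts for a given temperature gradient."""
    return S_uV_per_K * delta_T_K / 1000.0  # microvolts -> millivolts

# e.g. a hypothetical material with S = 200 uV/K across a 50 K gradient:
print(seebeck_voltage(200.0, 50.0), "mV")  # 10.0 mV
```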

Here’s a link to and a citation for the paper,

Low‐Temperature‐Processed Colloidal Quantum Dots as Building Blocks for Thermoelectrics by Mohamad I. Nugraha, Hyunho Kim, Bin Sun, Md Azimul Haque, Francisco Pelayo Garcia de Arquer, Diego Rosas Villalva, Abdulrahman El‐Labban, Edward H. Sargent, Husam N. Alshareef, Derya Baran. Advanced Energy Materials Volume 9, Issue 13 1803049 April 4, 2019 DOI: https://doi.org/10.1002/aenm.201803049 First published [online]: 14 February 2019

This paper is behind a paywall.

Smartphone as augmented reality system with software from Brown University

You need to see this,

Amazing, eh? The researchers are scheduled to present this work sometime this week at the ACM Symposium on User Interface Software and Technology (UIST) being held in New Orleans, US, from October 20-23, 2019.

Here’s more about ‘Portal-ble’ in an October 16, 2019 news item on ScienceDaily,

A new software system developed by Brown University [US] researchers turns cell phones into augmented reality portals, enabling users to place virtual building blocks, furniture and other objects into real-world backdrops, and use their hands to manipulate those objects as if they were really there.

The developers hope the new system, called Portal-ble, could be a tool for artists, designers, game developers and others to experiment with augmented reality (AR). The team will present the work later this month at the ACM Symposium on User Interface Software and Technology (UIST 2019) in New Orleans. The source code for Android is freely available for download on the researchers’ website, and iPhone code will follow soon.

“AR is going to be a great new mode of interaction,” said Jeff Huang, an assistant professor of computer science at Brown who developed the system with his students. “We wanted to make something that made AR portable so that people could use it anywhere without any bulky headsets. We also wanted people to be able to interact with the virtual world in a natural way using their hands.”

An October 16, 2019 Brown University news release (also on EurekAlert), which originated the news item, provides more detail,

Huang said the idea for Portal-ble’s “hands-on” interaction grew out of some frustration with AR apps like Pokemon GO. AR apps use smartphones to place virtual objects (like Pokemon characters) into real-world scenes, but interacting with those objects requires users to swipe on the screen.

“Swiping just wasn’t a satisfying way of interacting,” Huang said. “In the real world, we interact with objects with our hands. We turn doorknobs, pick things up and throw things. So we thought manipulating virtual objects by hand would be much more powerful than swiping. That’s what’s different about Portal-ble.”

The platform makes use of a small infrared sensor mounted on the back of a phone. The sensor tracks the position of people’s hands in relation to virtual objects, enabling users to pick objects up, turn them, stack them or drop them. It also lets people use their hands to virtually “paint” onto real-world backdrops. As a demonstration, Huang and his students used the system to paint a virtual garden into a green space on Brown’s College Hill campus.

Huang says the main technical contribution of the work was developing the right accommodations and feedback tools to enable people to interact intuitively with virtual objects.

“It turns out that picking up a virtual object is really hard if you try to apply real-world physics,” Huang said. “People try to grab in the wrong place, or they put their fingers through the objects. So we had to observe how people tried to interact with these objects and then make our system able to accommodate those tendencies.”

To do that, Huang enlisted students in a class he was teaching to come up with tasks they might want to do in the AR world — stacking a set of blocks, for example. The students then asked other people to try performing those tasks using Portal-ble, while recording what people were able to do and what they couldn’t. They could then adjust the system’s physics and user interface to make interactions more successful.

“It’s a little like what happens when people draw lines in Photoshop,” Huang said. “The lines people draw are never perfect, but the program can smooth them out and make them perfectly straight. Those were the kinds of accommodations we were trying to make with these virtual objects.”
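The accommodation Huang describes can be imagined as a snap-to-grab tolerance: rather than demanding a physically exact pinch, the system grabs the nearest virtual object within some distance of the hand. A minimal sketch of that idea, with names, data layout and tolerance all my own invention rather than Portal-ble’s actual code:

```python
# Snap an imperfect grab to the nearest virtual object within a
# tolerance, analogous to Photoshop straightening a wobbly line.

import math

def try_grab(hand, objects, tolerance=0.05):
    """Return the nearest object within `tolerance` metres of the hand, or None."""
    best, best_dist = None, tolerance
    for obj in objects:
        dist = math.dist(hand, obj["pos"])
        if dist <= best_dist:
            best, best_dist = obj, dist
    return best

objects = [{"name": "block", "pos": (0.0, 0.0, 0.0)},
           {"name": "cup", "pos": (1.0, 0.0, 0.0)}]
grabbed = try_grab((0.02, 0.01, 0.0), objects)  # slightly off-target grab
print(grabbed["name"])  # block
```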

The team also added sensory feedback — visual highlights on objects and phone vibrations — to make interactions easier. Huang said he was somewhat surprised that phone vibrations helped users to interact. Users feel the vibrations in the hand they’re using to hold the phone, not in the hand that’s actually grabbing for the virtual object. Still, Huang said, vibration feedback still helped users to more successfully interact with objects.

In follow-up studies, users reported that the accommodations and feedback used by the system made tasks significantly easier, less time-consuming and more satisfying.

Huang and his students plan to continue working with Portal-ble — expanding its object library, refining interactions and developing new activities. They also hope to streamline the system to make it run entirely on a phone. Currently, the system requires an infrared sensor and an external compute stick for extra processing power.

Huang hopes people will download the freely available source code and try it for themselves. 
“We really just want to put this out there and see what people do with it,” he said. “The code is on our website for people to download, edit and build off of. It will be interesting to see what people do with it.”

Co-authors on the research paper were Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin and John Hughes. The work was supported by the National Science Foundation (IIS-1552663) and by a gift from Pixar.

You can find the conference paper here on jeffhuang.com,

Portal-ble: Intuitive Free-hand Manipulation in Unbounded Smartphone-based Augmented Reality by Jing Qian, Jiaju Ma, Xiangyu Li, Benjamin Attal, Haoming Lai, James Tompkin, John F. Hughes, Jeff Huang. Brown University, Providence, RI, USA; Southeast University, Nanjing, China. Presented at the ACM Symposium on User Interface Software and Technology (UIST) held in New Orleans, US

This is the first time I’ve seen an augmented reality system that seems accessible, i.e., affordable. You can find out more on the Portal-ble ‘resource’ page where you’ll also find a link to the source code repository. The researchers, as noted in the news release, have an Android version available now with an iPhone version to be released in the future.

Sonifying proteins to make music and brand new proteins

Markus Buehler at the Massachusetts Institute of Technology (MIT) has been working with music and science for a number of years. My December 9, 2011 posting, Music, math, and spiderwebs, was the first one here featuring his work. My November 28, 2012 posting, Producing stronger silk musically, was a followup to Buehler’s previous work.

A June 28, 2019 news item on Azonano provides a recent update,

Composers string notes of different pitch and duration together to create music. Similarly, cells join amino acids with different characteristics together to make proteins.

Now, researchers have bridged these two seemingly disparate processes by translating protein sequences into musical compositions and then using artificial intelligence to convert the sounds into brand-new proteins. …

Caption: Researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature. Credit: Zhao Qin and Francisco Martin-Martinez

A June 26, 2019 American Chemical Society (ACS) news release, which originated the news item, provides more detail and a video,

To make proteins, cellular structures called ribosomes add one of 20 different amino acids to a growing chain in combinations specified by the genetic blueprint. The properties of the amino acids and the complex shapes into which the resulting proteins fold determine how the molecule will work in the body. To better understand a protein’s architecture, and possibly design new ones with desired features, Markus Buehler and colleagues wanted to find a way to translate a protein’s amino acid sequence into music.

The researchers transposed the unique natural vibrational frequencies of each amino acid into sound frequencies that humans can hear. In this way, they generated a scale consisting of 20 unique tones. Unlike musical notes, however, each amino acid tone consisted of the overlay of many different frequencies –– similar to a chord. Buehler and colleagues then translated several proteins into audio compositions, with the duration of each tone specified by the different 3D structures that make up the molecule. Finally, the researchers used artificial intelligence to recognize specific musical patterns that corresponded to certain protein architectures. The computer then generated scores and translated them into new-to-nature proteins. In addition to being a tool for protein design and for investigating disease mutations, the method could be helpful for explaining protein structure to broad audiences, the researchers say. They even developed an Android app [Amino Acid Synthesizer] to allow people to create their own bio-based musical compositions.
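To make the mapping concrete, here is a rough Python sketch of the sonification idea as the news release describes it: 20 unique tones, one per amino acid, each a chord-like overlay of frequencies, with durations set by structural context. The base frequency, overtone choices, and duration values below are placeholders of my own invention — the researchers derived theirs from quantum-chemistry calculations of each amino acid's molecular vibrations:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 amino acids

def tone_for(aa, base=220.0):
    """Map one amino acid to a chord: a fundamental plus overtones.

    Each amino acid gets a unique fundamental on a 20-step scale
    (20 equal divisions of the octave); the added overtones stand in
    for the 'overlay of many different frequencies' -- a chord rather
    than a pure note.
    """
    step = AMINO_ACIDS.index(aa)
    fundamental = base * (2 ** (step / 20))
    return [fundamental, fundamental * 2, fundamental * 3]

def sonify(sequence, structure):
    """Turn a protein sequence into a list of (chord, duration) events.

    `structure` assigns a secondary-structure label per residue, standing
    in for the 3D structural context that sets each tone's duration
    (H = helix, E = sheet, C = coil; the values are arbitrary here).
    """
    durations = {"H": 0.5, "E": 0.25, "C": 0.125}
    return [(tone_for(aa), durations[ss]) for aa, ss in zip(sequence, structure)]
```

Running `sonify` over a full amino acid sequence yields a playable score: the same residue always produces the same chord, while its duration varies with where it sits in the folded structure.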

Here’s the ACS video,

A June 26, 2019 MIT news release (also on EurekAlert) provides some specifics and includes two embedded audio files,

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.
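The transposition step the release mentions is simple to sketch: shift a frequency by whole octaves (factors of two) until it lands in the audible band, which keeps the relationships between tones intact. The 20 Hz–20 kHz band limits below are the usual figure for human hearing, my assumption rather than anything taken from the paper:

```python
def transpose_to_audible(freq_hz, lo=20.0, hi=20_000.0):
    """Shift a frequency into the human audible band by whole octaves.

    Molecular vibrations sit far above audibility (terahertz range);
    halving repeatedly drops the pitch by octaves, which preserves the
    relationships between tones -- a 3:2 ratio stays a 3:2 ratio, up to
    octave equivalence -- while making them audible.
    """
    while freq_hz > hi:
        freq_hz /= 2.0
    while freq_hz < lo:
        freq_hz *= 2.0
    return freq_hz
```

For example, a roughly 1 THz molecular vibration drops by 26 octaves to land just under 15 kHz.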

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

By using such a system, he says training the AI system with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

Here’s a link to and a citation for the paper,

A Self-Consistent Sonification Method to Translate Amino Acid Sequences into Musical Compositions and Application in Protein Design Using Artificial Intelligence by Chi-Hua Yu, Zhao Qin, Francisco J. Martin-Martinez, Markus J. Buehler. ACS Nano, 2019. DOI: https://doi.org/10.1021/acsnano.9b02180 Publication date: June 26, 2019. Copyright © 2019 American Chemical Society

This paper is behind a paywall.

ETA October 23, 2019 1000 hours: Ooops! I almost forgot the link to the Amino Acid Synthesizer.

Toronto, Sidewalk Labs, smart cities, and timber

The ‘smart city’ initiatives continue to fascinate. During the summer, Toronto’s efforts were described in a June 24, 2019 article by Katharine Schwab for Fast Company (Note: Links have been removed),

Today, Google sister company Sidewalk Labs released a draft of its master plan to transform 12 acres on the Toronto waterfront into a smart city. The document details the neighborhood’s buildings, street design, transportation, and digital infrastructure—as well as how the company plans to construct it.

When a leaked copy of the plan popped up online earlier this year, we learned that Sidewalk Labs plans to build the entire development, called Quayside, out of mass timber. But today’s release of the official plan reveals the key to doing so: Sidewalk proposes investing $80 million to build a timber factory and supply chain that would support its fully timber neighborhood. The company says the factory, which would be focused on manufacturing prefabricated building pieces that could then be assembled into fully modular buildings on site, could reduce building time by 35% compared to more traditional building methods.

“We would fund the creation of [a factory] somewhere in the greater Toronto area that we think could play a role in catalyzing a new industry around mass timber,” says Sidewalk Labs CEO and chairman Dan Doctoroff.

However, the funding of the factory is dependent on Sidewalk Labs being able to expand its development plan to the entire riverfront district. … [emphasis mine].

Here’s where I think it gets very interesting,

Sidewalk proposes sourcing spruce and fir trees from the forests in Ontario, Quebec, and British Columbia. While Canada has 40% of the world’s sustainable forests, Sidewalk claims, the country has few factories that can turn these trees into the building material. That’s why the company proposes starting a factory to process two kinds of mass timber: Cross-laminated timber (CLT) and glulam beams. The latter is meant specifically to bear the weight of the 30-story buildings Sidewalk hopes to build. While Sidewalk says that 84% of the larger district would be handed over for development by local companies, the plan requires that these companies uphold the same sustainability standards when it comes to performance.

Sidewalk says companies wouldn’t be required to build with CLT and glulam, but since the company’s reason for building the mass timber factory is that there aren’t many existing manufacturers to meet the needs for a full-scale development, the company’s plan might ultimately push any third-party developers toward using its [Google] factory to source materials. … [emphasis mine]

If I understand this rightly, Google wants to expand its plan to Toronto’s entire waterfront to make building a factory to produce the type of wood products Google wants to use in its Quayside development financially feasible (profitable). And somehow, local developers will not be forced to build the same kinds of structures although Google will be managing the entire waterfront development. Hmmm.

Let’s take a look at one of Google’s other ‘city ventures’.

Louisville, Kentucky

First, Alphabet is the name of Google’s parent company and it was Alphabet that offered the city of Louisville an opportunity for cheap, abundant internet service known as Google Fiber. From a May 6, 2019 article by Alex Correa for The Edge (Note: Links have been removed),

In 2015, Alphabet chose several cities in Kentucky to host its Google Fiber project. Google Fiber is a service providing broadband internet and IPTV directly to a number of locations, and the initiative in Kentucky … . The tech giant dug up city streets to bury fibre optic cables of their own, touting a new technique that would only require the cables to be a few inches beneath the surface. However, after two years of delays and negotiations after the announcement, Google abandoned the project in Louisville, Kentucky.

Like an unwanted pest in a garden, signs of Google’s presence can be seen and felt in the city streets. Metro Councilman Brandon Coan criticized the state of the city’s infrastructure, pointing out that strands of errant, tar-like sealant, used to cover up the cables, are “everywhere.” Speaking outside of a Louisville coffee shop that ran Google Fiber lines before the departure, he said, “I’m confident that Google and the city are going to negotiate a deal… to restore the roads to as good a condition as they were when they got here. Frankly, I think they owe us more than that.”

Google’s disappearance did more than just damage roads [emphasis mine] in Louisville. Plans for promising projects were abandoned, including transformative economic development that could have provided the population with new jobs and vastly different career opportunities than what was available. Add to that the fact that media coverage of the aborted initiative cast Louisville as the site of a failed experiment, creating an impression of the city as an embarrassment. (Google has since announced plans to reimburse the city $3.84 million over 20 months to help repair the damage to the city’s streets and infrastructure.)

A February 22, 2019 article on CBC (Canadian Broadcasting Corporation) Radio news online offers images of the damaged roadways and a partial transcript of a Day 6 radio show hosted by Brent Bambury,

Shortly after it was installed, the sealant on the trenches Google Fiber cut into Louisville roads popped out. (WDRB Louisville) Courtesy: CBC Radio Day 6

Google’s Sidewalk Labs is facing increased pushback to its proposal to build a futuristic neighbourhood in Toronto, after leaked documents revealed the company’s plans are more ambitious than the public had realized.

One particular proposal — which would see Sidewalk Labs taking a cut of property taxes in exchange for building a light rail transit line along Toronto’s waterfront — is especially controversial.

The company has developed an impressive list of promises for its proposed neighbourhood, including mobile pre-built buildings and office towers that tailor themselves to occupants’ behaviour.

But Louisville, Kentucky-based business reporter Chris Otts says that when Google companies come to town, it doesn’t always end well.

What was the promise Google Fiber made to Louisville back in 2015?

Well, it was just to be included as one of their Fiber cities, which was a pretty serious deal for Louisville at the time. A big coup for the mayor, and his administration had been working for years to get Google to consider adding Louisville to that list.

So if the city was eager, what sorts of accommodations were made for Google to entice them to come to Louisville?

Basically, the city did everything it could from a streamlining red tape perspective to get Google here … in terms of, you know, awarding them a franchise, and allowing them to be in the rights of way with this innovative technique they had for burying their cables here.
And then also, they [the city] passed a policy, which, to be sure, they say is just good policy regardless of Google’s support for it. But it had to do with how new Internet companies like Google can access utility poles to install their networks.

And Louisville ended up spending hundreds of thousands of dollars to defend that new policy in court in lawsuits by AT&T and by the traditional cable company here.

When Google Fiber starts doing business, they’re offering cheaper high speed Internet access, and they start burying these cables in the ground.

When did things start to go sideways for this project?

I don’t know if I would say ‘almost immediately,’ but certainly the problems were evident fairly quickly.

So they started their work in 2017. If you picture it, [in] the streets you can see on either side there are these seams. They look like little strings … near the end of the streets on both sides. And there are cuts in the street where they buried the cable and they topped it off with this sealant.

And fairly early on — within months, I would say, of them doing that — you could see the sealant popping out. The conduit in there [was] visible or exposed. And so it was fairly evident that there were problems with it pretty quickly.

Was this the first time that they had used this system and the sealant that you’re describing?

It was the first time, according to them, that they had used such shallow trenches in the streets.

So these are as shallow as two inches below the pavement surface that they’d bury these cables. It’s the ultra-shallow version of this technique.

And what explanation did Google Fiber offer for their decision to leave Louisville?

That it was basically a business decision; that they were trying this construction method to see if it was sustainable and they just had too many problems with it.

And as they said directly in their … written statement about this, they decided that instead of doing things right and starting over, which they would have to do essentially to keep providing service in Louisville, that it was the better business decision for them to just pick up and leave.

Toronto’s Sidewalk Labs isn’t Google Fiber — but they’re both owned by Google’s parent company, Alphabet.

If Louisville could give Toronto a piece of advice about welcoming a Google infrastructure project to town, what do you think that advice would be?

The biggest lesson from this is that one day they can be next to you at the press conference saying what a great city you are and how happy they are to … provide new service in your market, and then the next day, with almost no notice, they can say, “You know what? This doesn’t make sense for us anymore. And by the way, see ya. Thanks for having us. Sorry it didn’t work out.”

Google’s promises to Toronto

Getting back to Katharine Schwab’s June 24, 2019 Fast Company article,

The factory is also key to another of Sidewalk’s promises: Jobs. According to Sidewalk, the factory itself would create 2,500 jobs [emphasis mine] along the entire supply chain over a 20-year period. But even if the Canadian government approves Sidewalk’s plan and commits to building out the entire waterfront district to take advantage of the mass timber factory’s economies of scale, there are other regulatory hurdles to overcome. Right now, the building code in Toronto doesn’t allow for timber buildings over six stories tall. All of Sidewalk’s proposed buildings are over six stories, and many of them go up to 30 stories. Doctoroff said he was optimistic that the company will be able to get regulations changed if the city decides to adopt the plan. There are several examples of timber buildings that are already under construction, with a planned skyscraper in Japan that will be 70 stories.

Sidewalk’s proposal is the result of 18 months of planning, which involved getting feedback from community members and prototyping elements like a building raincoat that the company hopes to include in the final development. It has come under fire from privacy advocates in particular, and the Canadian government is currently facing a lawsuit from a civil liberties group over its decision to allow a corporation to propose public privacy governance standards.

Now that the company has released the plan, it will be up to the Canadian government to decide whether to move forward. And the mass timber factory, in particular, will be dependent on the government adopting Sidewalk’s plan wholesale, far beyond the Quayside development—a reminder that Sidewalk is a corporation that’s here to make money, dangling investment dollars in front of the government to incentivize it to embrace Sidewalk as the developer for the entire area.

A few thoughts

Those folks in Louisville made a lot of accommodations for Google only to have the company abandon them. They will get some money in compensation, finally, but it doesn’t make up for the lost jobs and the national, if not international, loss of face.

I would think that should things go wrong, Google would do exactly the same thing to Toronto. As for the $80M promise, here’s exactly how it’s phrased in the June 24, 2019 Sidewalk Labs news release,

… Together with local partners, Sidewalk proposes to invest up to $80 million in a mass timber factory in Ontario to jumpstart this emerging industry.

So, Alphabet/Google/Sidewalk has proposed up to an $80M investment—with local partners. I wonder how much this factory is supposed to cost and what kinds of accommodations Alphabet/Google/Sidewalk will demand. Possibilities include policy changes, changes in municipal bylaws, and government money. In other words, Canadian taxpayers could end up footing part of the bill and/or local developers could be required to cover an outsized percentage of the costs for the factory as they jockey for the opportunity to develop part of Toronto’s waterfront.

Other than Louisville, what’s the company’s track record with regard to its partnerships with cities and municipalities? I haven’t found any success stories in my admittedly brief search. Unusually, the company doesn’t seem to be promoting any of its successful city partnerships.

Smart city

While my focus has been on the company’s failure with Louisville and the possible dangers inherent to Toronto in a partnership with this company, it shouldn’t be forgotten that all of this development is in the name of a ‘smart’ city and that means data-driven. My March 28, 2018 posting features some of the issues with the technology, 5G, that will be needed to make cities ‘smart’. There’s also my March 20, 2018 posting (scroll down about 30% of the way) which looks at ‘smart’ cities in Canada with a special emphasis on Vancouver.

You may want to check out David Skok’s February 15, 2019 Maclean’s article (Cracks in the Sidewalk) for a Torontonian’s perspective.

Should you wish to do some delving yourself, there’s Sidewalk Labs website here and a June 24, 2019 article by Matt McFarland for CNN detailing some of the latest news about the backlash in Toronto concerning Sidewalk Labs.

A September 2019 update

Waterfront Toronto’s Digital Strategy Advisory Panel (DSAP) prepared a report in August 2019, which was subsequently published on September 10, 2019. To sum it up, the panel was not impressed with Google’s June 2019 draft master plan. From a September 11, 2019 news item on the Guardian (Note: Links have been removed),

A controversial smart city development in Canada has hit another roadblock after an oversight panel called key aspects of the proposal “irrelevant”, “unnecessary” and “frustratingly abstract” in a new report.

The project on Toronto’s waterfront, dubbed Quayside, is a partnership between the city and Google’s sister company Sidewalk Labs. It promises “raincoats” for buildings, autonomous vehicles and cutting-edge wood-frame towers, but has faced numerous criticisms in recent months.

A September 11, 2019 article by Ian Bick of Canadian Press published on the CBC (Canadian Broadcasting Corporation) website offers more detail,

Preliminary commentary from Waterfront Toronto’s digital strategy advisory panel (DSAP) released Tuesday said the plan from Google’s sister company Sidewalk is “frustratingly abstract” and that some of the innovations proposed were “irrelevant or unnecessary.”

“The document is somewhat unwieldy and repetitive, spreads discussions of topics across multiple volumes, and is overly focused on the ‘what’ rather than the ‘how,’ ” said the report on the panel’s comments.

Some on the 15-member panel, an arm’s-length body that gives expert advice to Waterfront Toronto, have also found the scope of the proposal to be unclear or “concerning.”

The report says that some members also felt the official Sidewalk plan did not appear to put the citizen at the centre of the design process for digital innovations, and raised issues with the way Sidewalk has proposed to manage data that is generated from the neighbourhood.

The panel’s early report is not official commentary from Waterfront Toronto, the multi-government body that is overseeing the Quayside development, but is meant to indicate areas that need improvement.

The panel, chaired by University of Ottawa law professor Michael Geist, includes executives, professors, and other experts on technology, privacy, and innovation.

Sidewalk Labs spokeswoman Keerthana Rang said the company appreciates the feedback and already intends to release more details in October on the digital innovations it hopes to implement at Quayside.

I haven’t been able to find the response to DSAP’s September 2019 critique but I did find this Toronto Sidewalk Labs report, Responsible Data Use Assessment Summary: Overview of Collab, dated October 16, 2019. Of course, there’s still another 10 days before October 2019 is past.