
Digital aromas? And a potpourri of ‘scents and sensibility’

Mmm… smelly books. Illustration by Dorothy Woodend. [downloaded from https://thetyee.ca/Culture/2020/11/19/Smell-More-Important-Than-Ever/]

I don’t get to post about scent as often as I would like, although I do have some pretty interesting items here (links to those items follow towards the end of this post).

Digital aromas

This Nov. 11, 2020 Weizmann Institute of Science press release (also on EurekAlert published on Nov. 19, 2020) from Israel gladdened me,

Fragrances – promising mystery, intrigue and forbidden thrills – are blended by master perfumers, their recipes kept secret. In a new study on the sense of smell, Weizmann Institute of Science researchers have managed to strip much of the mystery from even complex blends of odorants, not by uncovering their secret ingredients, but by recording and mapping how they are perceived. The scientists can now predict how any complex odorant will smell from its molecular structure alone. This study may not only revolutionize the closed world of perfumery, but eventually lead to the ability to digitize and reproduce smells on command. The proposed framework for odors, created by neurobiologists, computer scientists, and a master-perfumer, and funded by a European initiative [NanoSmell] for Future Emerging Technologies (FET-OPEN), was published in Nature.

“The challenge of plotting smells in an organized and logical manner was first proposed by Alexander Graham Bell [emphasis mine] over 100 years ago,” says Prof. Noam Sobel of the Institute’s Neurobiology Department. Bell threw down the gauntlet: “We have very many different kinds of smells, all the way from the odor of violets [emphasis mine] and roses up to asafoetida. But until you can measure their likenesses and differences you can have no science of odor.” This challenge had remained unresolved until now.

This century-old challenge indeed highlighted the difficulty in fitting odors into a logical system: There are millions of odor receptors in our noses, consisting of hundreds of different subtypes, each shaped to detect particular molecular features. Our brains potentially perceive millions of smells in which these single molecules are mixed and blended at varying intensities. Thus, mapping this information has been a challenge. But Sobel and his colleagues, led by graduate student Aharon Ravia and Dr. Kobi Snitz, found there is an underlying order to odors. They reached this conclusion by adopting Bell’s concept – namely to describe not the smells themselves, but rather the relationships between smells as they are perceived.

In a series of experiments, the team presented volunteer participants with pairs of smells and asked them to rate these smells on how similar the two seemed to one another, ranking the pairs on a similarity scale ranging from “identical” to “extremely different.” In the initial experiment, the team created 14 aromatic blends, each made of about 10 molecular components, and presented them two at a time to nearly 200 volunteers, so that by the end of the experiment each volunteer had evaluated 95 pairs.

To translate the resulting database of thousands of reported perceptual similarity ratings into a useful layout, the team refined a physicochemical measure they had previously developed. In this calculation, each odorant is represented by a single vector that combines 21 physical measures (polarity, molecular weight, etc.). To compare two odorants, each represented by a vector, the angle between the vectors is taken to reflect the perceptual similarity between them. A pair of odorants with a low angle distance between them are predicted similar, those with high angle distance between them are predicted different.

To test this model, the team first applied it to data collected by others, primarily a large study in odor discrimination by Bushdid [C. Bushdid] and colleagues from the lab of Prof. Leslie Vosshall at the Rockefeller Institute in New York. The Weizmann team found that their model and measurements accurately predicted the Bushdid results: Odorants with low angle distance between them were hard to discriminate; odors with high angle distance between them were easy to discriminate. Encouraged by the model accurately predicting data collected by others, the team continued to test for themselves.

The team concocted new scents and invited a fresh group of volunteers to smell them, again using their method to predict how this set of participants would rate the pairs – at first 14 new blends and then, in the next experiment, 100 blends. The model performed exceptionally well. In fact, the results were in the same ballpark as those for color perception – sensory information that is grounded in well-defined parameters. This was especially surprising considering each individual likely has a unique complement of smell receptor subtypes, which can vary by as much as 30% across individuals.

Because the “smell map,” [emphasis mine] or “metric” predicts the similarity of any two odorants, it can also be used to predict how an odorant will ultimately smell. For example, any novel odorant that is within 0.05 radians or less from banana will smell exactly like banana. As the novel odorant gains distance from banana, it will smell banana-ish, and beyond a certain distance, it will stop resembling banana.

The team is now developing a web-based tool. This set of tools not only predicts how a novel odorant will smell, but can also synthesize odorants by design. For example, one can take any perfume with a known set of ingredients, and using the map and metric, generate a new perfume with no components in common with the original perfume, but with exactly the same smell. Such creations in color vision, namely non-overlapping spectral compositions that generate the same perceived color, are called color metamers, and here the team generated olfactory metamers.

The study’s findings are a significant step toward realizing a vision of Prof. David Harel of the Computer and Applied Mathematics Department, who also serves as Vice President of the Israel Academy of Sciences and Humanities and who was a co-author of the study: Enabling computers to digitize and reproduce smells. In addition, of course, to being able to add realistic flower or sea aromas to your vacation pictures on social media, giving computers the ability to interpret odors in the way that humans do could have an impact on environmental monitoring and the biomedical and food industries, to name a few. Still, master perfumer Christophe Laudamiel, who is also a co-author of the study, remarks that he is not concerned for his profession just yet.

Sobel concludes: “100 years ago, Alexander Graham Bell posed a challenge. We have now answered it: The distance between rose and violet is 0.202 radians (they are remotely similar), the distance between violet and asafoetida is 0.5 radians (they are very different), and the difference between rose and asafoetida is 0.565 radians (they are even more different). We have converted odor percepts into numbers, and this should indeed advance the science of odor.”
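To make that angle-distance arithmetic a bit more concrete, here is a minimal sketch (my own Python, not the Weizmann team’s code) of the idea: each blend is collapsed into a single vector of physicochemical descriptors, and the perceived similarity of two blends is read off the angle between their vectors. The 21 descriptor values below are random placeholders, not real measurements.

```python
import numpy as np

def odorant_vector(component_descriptors):
    """Collapse a blend into one vector by averaging its components' descriptors.

    `component_descriptors` has shape (n_molecules, 21); intensity weighting and
    normalization are omitted for simplicity.
    """
    return np.mean(np.asarray(component_descriptors, dtype=float), axis=0)

def angle_distance(v1, v2):
    """Angle in radians between two odorant vectors; smaller angle = more similar smell."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Toy comparison of two made-up 10-component blends (descriptor values are random placeholders).
rng = np.random.default_rng(0)
blend_a = odorant_vector(rng.random((10, 21)))
blend_b = odorant_vector(rng.random((10, 21)))

d = angle_distance(blend_a, blend_b)
if d <= 0.05:      # the press release's threshold for "will smell exactly like" the target
    verdict = "predicted indistinguishable"
elif d <= 0.2:     # roughly the rose-violet distance quoted above: remotely similar
    verdict = "predicted somewhat similar"
else:
    verdict = "predicted different"
print(f"angle distance: {d:.3f} rad -> {verdict}")
```

In the real model the descriptors are specific physicochemical measures (polarity, molecular weight, and so on) and the components are weighted, but the geometry is the same.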

I emphasized Alexander Graham Bell and the ‘smell map’ because I thought they were interesting and violets because they will be mentioned again later in this post.

Meanwhile, here’s a link to and a citation for the paper (the proposed framework for odors),

A measure of smell enables the creation of olfactory metamers by Aharon Ravia, Kobi Snitz, Danielle Honigstein, Maya Finkel, Rotem Zirler, Ofer Perl, Lavi Secundo, Christophe Laudamiel, David Harel & Noam Sobel. Nature volume 588, pages 118–123 (2020) DOI: https://doi.org/10.1038/s41586-020-2891-7 Published online: 11 November 2020 Journal Issue Date: 03 December 2020

This paper is behind a paywall.
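And since the paper’s headline result is the olfactory metamer, here is an equally hedged toy sketch of how one might assemble one under that angle-distance model: greedily pick molecules that share no ingredients with the target blend while driving the angle between the two blend vectors toward zero. The molecule ‘library’ is random placeholder data; the team’s actual web-based tool is presumably far more sophisticated.

```python
import numpy as np

def angle_distance(v1, v2):
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

rng = np.random.default_rng(1)
library = rng.random((500, 21))                   # hypothetical molecule descriptors
target_ids = set(rng.choice(500, size=10, replace=False).tolist())
target = library[list(target_ids)].mean(axis=0)   # the blend we want to mimic

metamer = []                                      # indices of the metamer's components
for _ in range(10):                               # assemble a 10-component metamer
    best_i, best_d = None, float("inf")
    for i in range(len(library)):
        if i in target_ids or i in metamer:       # no shared ingredients allowed
            continue
        candidate = library[metamer + [i]].mean(axis=0)
        d = angle_distance(candidate, target)
        if d < best_d:
            best_i, best_d = i, d
    metamer.append(best_i)

print(f"metamer components: {sorted(metamer)}")
print(f"angle to target: {best_d:.3f} rad (below ~0.05 rad would be predicted to smell the same)")
```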

Smelling like an old book

Some folks are missing the smell of bookstores and according to Dorothy Woodend’s Nov. 19, 2020 article for The Tyee, that longing has resulted in a perfume (Note: Links have been removed),

The news that Powell’s Books, Portland’s (Oregon, US) beloved bookstore, had released a signature scent was greeted with bemusement by some, confusion by others. But to me it made perfect scents. (Err, sense.) If you love something, I mean really love it, you love the way it smells.

Old books have a distinctive peppery aroma that draws bibliophiles like bears to honey. Some people are very specific about their book smells, preferring vintage Penguin paperbacks from the mid to late 1960s. Those orange spines aged like fine wine.

Powell’s created the scent after people complained about missing the smell of the store during lockdown. It got me thinking about how identity is often bound up with smell and, more widely, how smells belong to cultural, even historic moments.

Olfactory obsolescence can have weird side effects … . Memories of one’s grandfather smelling like pipe tobacco are pretty much now only a literary conceit. But pipe smoke isn’t the only dinosaur smell that is going extinct. Even in my lifetime, I remember the particular aroma of baseball cards and chalk dust.

Remember violets? Here’s more about Powell’s Unisex Fragrance (from Powell’s purchase webpage),

Notes:
• Wood
• Violet
• Biblichor

Description:
Like the crimson rhododendrons in Rebecca, the heady fragrance of old paper creates an atmosphere ripe with mood and possibility. Invoking a labyrinth of books; secret libraries; ancient scrolls; and cognac swilled by philosopher-kings, Powell’s by Powell’s delivers the wearer to a place of wonder, discovery, and magic heretofore only known in literature.

How to wear:
This scent contains the lives of countless heroes and heroines. Apply to the pulse points when seeking sensory succor or a brush with immortality.

Details:
• 1 ounce
• Glass bottle
• Limited-edition item available while supplies last

Shipping details:
Powell’s Unisex Fragrance ships separately and only in the contiguous United States [emphasis mine]. Special shipping rates apply.

Links: oPhone and heritage smells

Some years ago, I was quite intrigued by the oPhone (scent by telephone) and wrote these: For the smell of it, a Feb. 14, 2014 posting, and Smelling Paris in New York (update on the oPhone), a June 18, 2014 posting. I haven’t found any updates about oPhone in my brief searches on the web.

There was a previous NANOSMELL (sigh, these projects have various approaches to capitalization) posting here: Scented video games: a nanotechnology project in Europe, published May 27, 2016.

More recently on the smell front, there was this May 22, 2017 posting, Preserving heritage smells (scents). FYI, the authors of the 2017 paper are part of the Odeuropa project described in the next subsection.

Context: NanoSmell and Odeuropa

Science funding is intimately linked to science policy. Examination of science funding can be useful for understanding some of the contrasts between how science is conducted in different jurisdictions, e.g., Europe and Canada.

Before launching into the two ‘scent’ projects, NanoSmell and Odeuropa, I’m offering a brief description of one of the European Union’s (EU) most comprehensive and substantive (many, many Euros) science funding initiatives. The latest iteration of this initiative has funded and is funding both NanoSmell and Odeuropa.

Horizon Europe

The initiative has gone under different names: Framework Programmes 1-7; then, in 2014, it became Horizon 2020, with its end date as part of its name. The latest initiative, Horizon Europe, is destined to start in 2021 and end in 2027.

The most recent Horizon Europe budget information I’ve been able to find is in this Nov. 10, 2020 article by Éanna Kelly and Goda Naujokaitytė for ScienceBusiness.net,

EU governments and the European Parliament on Tuesday [Nov. 10, 2020] afternoon announced an extra €4 billion will be added to the EU’s 2021-2027 research budget, following one-and-a-half days of intense negotiations in Brussels.

The deal, which still requires a final nod from parliament and member states, puts Brussels closer to implementing its gigantic €1.8 trillion budget and COVID-19 recovery package. [emphasis mine]

In all, a series of EU programmes gained an additional €15 billion. Among them, the student exchange programme Erasmus+ went up by €2.2 billion, health spending in EU4Health by €3.4 billion, and the InvestEU programme got an additional €1 billion.

Parliamentarians have been fighting to reverse cuts [emphasis mine] made to science and other investment programmes since July [2020], when EU leaders settled on €80.9 billion (at 2018 prices) for Horizon Europe, significantly less than €94.4 billion proposed by the European Commission.

“I am really proud that we fought – all six of us as a team,” said van Overtveldt [Johan Van Overtveldt, Belgian MEP {member of European Parliament} on the budget committee], pointing to the other budget MEPs who headed talks with the German Presidency of the Council. “You can take the term ‘fight’ literally. We had to fight for what we got.”

“We are all very proud of what we achieved, not for the parliament’s pride but in the interest of European citizens short-term and long-term,” van Overtveldt said.

One of the most visible campaigners for science in the Parliament, MEP Christian Ehler, spokesman on Horizon Europe for the European Peoples’ Party, called the deal “a victory for researchers, scientists and citizens alike.” [emphasis mine]

The challenge now for negotiators will be to figure out how to divide extra funds [emphasis mine] within Horizon Europe fairly, with officials attached to public-private partnerships, the European Research Council, the new research missions, and the European Innovation Council all baying for more cash.

To sum up, in July 2020, legislators settled on the figure of €80.9 billion for science funding over the seven-year period of 2021 – 2027, to be administered by Horizon Europe. After the fight, €4 billion was added for a total of €84.9 billion in research funding over the next seven years.

This is fascinating to me; I don’t recall ever seeing any mention of Canadian legislators arguing over how much money should be allocated to research in articles about the Canadian budget. The usual approach is to treat the announcement as a fait accompli and a matter for celebration or intense criticism.

Smell of money?

All this talk of budgets and heritage smells has me thinking about the ‘smell of money’. What happens as money or currency becomes virtual rather than actual? And, what happened to the smell of Canadian money which is now made of plastic?

I haven’t found any answers to those questions but I did find an interesting June 14, 2012 article by Sarah Gardner for Marketplace.org titled, Sniffing out what money smells like. The focus is on money made of cotton and linen. One other note: this is not the Canadian Broadcasting Corporation’s Marketplace television programme. This is a US programme from American Public Media (from the Marketplace.org FAQs webpage).

Now onto the funding for European smell research.

NanoSmell

The Israeli researchers’ work was funded by Horizon 2020’s NanoSmell project, which ran from Sept. 1, 2015 – August 31, 2019, and this was their objective (from the CORDIS NanoSmell project page),

“Despite years of promise, an odor-emitting component in devices such as televisions, phones, computers and more has yet to be developed. Two major obstacles in the way of such development are poor understanding of the olfactory code (the link between odorant structure, neural activity, and odor perception), and technical inability to emit odors in a reversible manner. Here we propose a novel multidisciplinary path to solving this basic scientific question (the code), and in doing so generate a solution to the technical limitation (controlled odor emission). The Bachelet lab will design DNA strands that assume a 3D structure that will specifically bind to a single type of olfactory receptor and induce signal transduction. These DNA-based “artificial odorants” will be tagged with a nanoparticle that changes their conformation in response to an external electromagnetic field. Thus, we will have in hand an artificial odorant that is remotely switchable. The Hansson lab will use tissue culture cells expressing insect olfactory receptors, functional imaging, and behavioral tests to validate the function and selectivity of these switchable odorants in insects. The Carleton lab will use imaging in order to investigate the patterns of neural activity induced by these artificial odorants in rodents. The Sobel lab will apply these artificial odorants to the human olfactory system, [emphasis mine] and measure perception and neural activity following switching the artificial smell on and off. Finally, given a potential role for olfactory receptors in skin, the Del Rio lab will test the efficacy of these artificial odorants in promoting wound healing. At the basic science level, this approach may allow solving the combinatorial code of olfaction. At the technology level, beyond novel pharmacology, we will provide proof-of-concept for countless novel applications ranging from insect pest-control to odor-controlled environments and odor-emitting devices such as televisions, phones, and computers.” [emphasis mine]

Unfortunately, I can’t find anything on the NanoSmell Project Results page with links to any proof-of-concept publications or pilot projects for the applications mentioned. Mind you, I wouldn’t have recognized the Israeli team’s A measure of smell enables the creation of olfactory metamers as a ‘smell map’.

Odeuropa

Remember the ‘heritage smells’ 2017 posting? The research paper listed there has two authors, both of whom form one of the groups (University College London; scroll down) associated with Odeuropa’s Horizon 2020 project announced in a Nov. 17, 2020 posting by the project lead, Inger Leemans on the Odeuropa website (Note: A link has been removed),

The Odeuropa consortium is very proud to announce that it has been awarded a €2.8M grant from the EU Horizon 2020 programme for the project, “ODEUROPA: Negotiating Olfactory and Sensory Experiences in Cultural Heritage Practice and Research”. Smell is an urgent topic which is fast gaining attention in different communities. Amongst the questions the Odeuropa project will focus on are: what are the key scents, fragrant spaces, and olfactory practices that have shaped our cultures? How can we extract sensory data from large-scale digital text and image collections? How can we represent smell in all its facets in a database? How should we safeguard our olfactory heritage? And — why should we? …

The project bundles an array of academic expertise from across many disciplines—history, art history, computational linguistics, computer vision, semantic web, museology, heritage science, and chemistry, with further expertise from cultural heritage institutes, intangible heritage organisations, policy makers, and the creative and fragrance industries.

I’m glad to see this interest in scent, heritage, communication, and more. Perhaps one day we’ll see similar interest here in Canada. Subtle does not mean unimportant, eh?

Branched flows of light look like trees say “explorers of experimental science” at Technion

Enhancing soap bubbles for your science explorations? It sounds like an entertaining activity you might give children for ‘painless’ science education. In this case, researchers at Technion – Israel Institute of Technology have made an exciting discovery. The following video is where I got the phrase “explorers of experimental science,”

A July 1, 2020 news item on Nanowerk announces the work (Note: A link has been removed),

A team of researchers from the Technion – Israel Institute of Technology has observed branched flow of light for the very first time. The findings are published in Nature and are featured on the cover of the July 2, 2020 issue (“Observation of branched flow of light”).

The study was carried out by Ph.D. student Anatoly (Tolik) Patsyk, in collaboration with Miguel A. Bandres, who was a postdoctoral fellow at Technion when the project started and is now an Assistant Professor at CREOL, College of Optics and Photonics, University of Central Florida. The research was led by Technion President Professor Uri Sivan and Distinguished Professor Mordechai (Moti) Segev of the Technion’s Physics and Electrical Engineering Faculties, the Solid State Institute, and the Russell Berrie Nanotechnology Institute.

A July 2, 2020 Technion press release, which originated the news item, delves further into the research,

When waves travel through landscapes that contain disturbances, they naturally scatter, often in all directions. Scattering of light is a natural phenomenon, found in many places in nature. For example, the scattering of light is the reason for the blue color of the sky. As it turns out, when the length over which disturbances vary is much larger than the wavelength, the wave scatters in an unusual fashion: it forms channels (branches) of enhanced intensity that continue to divide or branch out, as the wave propagates.  This phenomenon is known as branched flow. It was first observed in 2001 in electrons and had been suggested to be ubiquitous and occur also for all waves in nature, for example – sound waves and even ocean waves. Now, Technion researchers are bringing branched flow to the domain of light: they have made an experimental observation of the branched flow of light.

“We always had the intention of finding something new, and we were eager to find it. It was not what we started looking for, but we kept looking and we found something far better,” says Asst. Prof. Miguel Bandres. “We are familiar with the fact that waves spread when they propagate in a homogeneous medium. But for other kinds of mediums, waves can behave in very different ways. When we have a disordered medium where the variations are not random but smooth, like a landscape of mountains and valleys, the waves will propagate in a peculiar way. They will form channels that keep dividing as the wave propagates, forming a beautiful pattern resembling the branches of a tree.” 

In their research, the team coupled a laser beam to a soap membrane, which contains random variations in membrane thickness. They discovered that when light propagates within the soap film, rather than being scattered, the light forms elongated branches, creating the branched flow phenomenon for light.

“In optics we usually work hard to make light stay focused and propagate as a collimated beam, but here the surprise is that the random structure of the soap film naturally caused the light to stay focused. It is another one of nature’s surprises,” says Tolik Patsyk. 

The ability to create branched flow in the field of optics offers new and exciting opportunities for investigating and understanding this universal wave phenomenon.

“There is nothing more exciting than discovering something new and this is the first demonstration of this phenomenon with light waves,” says Technion President Prof. Uri Sivan. “This goes to show that intriguing phenomena can also be observed in simple systems and one just has to be perceptive enough to uncover them. As such, bringing together and combining the views of researchers from different backgrounds and disciplines has led to some truly interesting insights.”

“The fact that we observe it with light waves opens remarkable new possibilities for research, starting with the fact that we can characterize the medium in which light propagates to very high precision and the fact that we can also follow those branches accurately and study their properties,” he adds.

Distinguished Prof. Moti Segev looks to the future. “I always educate my team to think beyond the horizon,” he says, “to think about something new, and at the same time – look at the experimental facts as they are, rather than try to adapt the experiments to meet some expected behavior. Here, Tolik was trying to measure something completely different and was surprised to see these light branches which he could not initially explain. He asked Miguel to join in the experiments, and together they upgraded the experiments considerably – to the level they could isolate the physics involved. That is when we started to understand what we see. It took more than a year until we understood that what we have is the strange phenomenon of “branched flow”, which at the time was never considered in the context of light waves. Now, with this observation – we can think of a plethora of new ideas. For example, using these light branches to control the fluidic flow in liquid, or to combine the soap with fluorescent material and cause the branches to become little lasers. Or to use the soap membranes as a platform for exploring fundamentals of waves, such as the transitions from ordinary scattering which is always diffusive, to branched flow, and subsequently to Anderson localization. There are many ways to continue this pioneering study. As we did many times in the past, we would like to boldly go where no one has gone before.” 

The project is now continuing in the laboratories of Profs. Segev and Sivan at Technion, and in parallel in the newly established lab of Prof. Miguel Bandres at UCF. 
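For readers who want to poke at the physics, here is a rough numerical sketch (my own simplification in Python/NumPy, not the Technion team’s code or experiment) of branched flow: a paraxial beam marched through a weak refractive-index landscape whose correlation length is much larger than the wavelength, which is the condition the press release describes. All parameters are illustrative; plotting the intensity array as an image shows the tree-like filaments.

```python
import numpy as np

# Grid and optical parameters (illustrative values only)
nx, nz = 1024, 800
wavelength = 0.5e-6                      # 500 nm light
k0 = 2 * np.pi / wavelength
dx, dz = 0.5e-6, 2e-6                    # transverse and propagation steps (m)
x = (np.arange(nx) - nx // 2) * dx
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)

# Smooth random refractive-index landscape dn(z, x): filtered white noise with a
# correlation length much larger than the wavelength (the key branching condition).
corr_len = 10e-6
rng = np.random.default_rng(42)
kz = 2 * np.pi * np.fft.fftfreq(nz, d=dz)
KX, KZ = np.meshgrid(kx, kz)             # both arrays have shape (nz, nx)
smooth = np.exp(-0.25 * corr_len**2 * (KX**2 + KZ**2))
dn = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((nz, nx))) * smooth))
dn *= 2e-3 / dn.std()                    # weak fluctuations, rms index contrast ~2e-3

# Wide, nearly plane-wave input beam
field = np.exp(-(x / 150e-6) ** 2).astype(complex)

# Split-step propagation: free-space diffraction in Fourier space, then the
# phase picked up from the local refractive index.
diffract = np.exp(-1j * kx**2 * dz / (2 * k0))
intensity = np.empty((nz, nx))
for iz in range(nz):
    field = np.fft.ifft(np.fft.fft(field) * diffract)
    field *= np.exp(1j * k0 * dn[iz] * dz)
    intensity[iz] = np.abs(field) ** 2

# Rows of `intensity` (propagation distance) show filaments that keep splitting:
# branched flow. e.g. plt.imshow(intensity, aspect='auto') reveals the "tree".
print("final-plane peak-to-mean intensity:",
      round(float(intensity[-1].max() / intensity[-1].mean()), 2))
```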

Here’s a link to and a citation for the paper,

Observation of branched flow of light by Anatoly Patsyk, Uri Sivan, Mordechai Segev & Miguel A. Bandres. Nature volume 583, pages 60–65 (2020) DOI: https://doi.org/10.1038/s41586-020-2376-8 Published: 01 July 2020 Issue Date: 02 July 2020

This paper is behind a paywall.

Israeli startup (Nanomedic) and a ‘ray’ gun that shoots wound-healing skin


Where I see a ‘ray’ gun, Rina Raphael, author of a July 6, 2019 article for Fast Company, sees a water pistol (Note: Links have been removed),

Imagine if bandaging looked a little more like, well, a water gun?

Israeli startup Nanomedic Technologies Ltd., a subsidiary of medical device company Nicast, has invented a new mechanical contraption to treat burns, wounds, and surgical injuries by mimicking human tissue. Shaped like a children’s toy, the lightweight SpinCare emits a proprietary nanofiber “second skin” that completely covers the area that needs to heal.

All one needs to do is aim, squeeze the two triggers, and fire off an electrospun polymer material that attaches to the skin.

The Nanomedic spray method avoids any need to come into direct contact with the wound. In that sense, it completely sidesteps painful routine bandage dressings. The transient skin then fully develops into a secure physical barrier with tough adherence. Once new skin is regenerated, usually between two to three weeks (depending on the individual’s heal time), the layer naturally peels off.

“You don’t replace it,” explains Nanomedic CEO Dr. Chen Barak. “You put it only once—on the day of application—and it remains there until it feels the new layer of skin healed.”

“It’s the same model as an espresso machine,” says Barak.

The SpinCare holds single-use ampoules containing Nanomedic’s polymer formulation. Once the capsule is firmly in place, one activates the device roughly eight inches from the wound. Pressing the trigger activates the electrospinning process, which sprays a web-like layer of nanofibers directly on the wound.

The solution adjusts to the morphology of the wound, thereby creating a transient skin layer that imitates the skin structure’s human tissue. It’s a transparent, protective film that then allows the patient and doctor to monitor progress. Once the wound has healed and developed a new layer of skin, the SpinCare “bandage” falls off on its own.

The product is already being tested in hospitals. In the coming year, following FDA clearance, Nanomedic plans to expand to emergency rooms, ambulances, military use, and disaster relief response like fire truck companies. The global wound healing market is expected to hit $35 billion by 2025, according to a report by Transparency Market Research.

Nanomedic joins other researchers attempting to reimagine the wound healing process. Engineers at the University of Wisconsin-Madison, for example, created a new kind of protective bandage that sends a mild electrical stimulation, thereby “dramatically” reducing the time deep surgical wounds take to heal.

As for the playful (yet functional) design, it resembles other medical tools utilizing the point-and-shoot feature. Researchers at the Technion-Israel Institute of Technology and Boston Children’s Hospital recently revealed a “hot-glue gun” that melds torn human tissues together. The medical glue is meant to replace painful and often scarring stitches and staples.

Down the line, Nanomedic plans to enter the in-home care market, where it believes it can better assist caretakers for treatment of chronic wounds, such as pressure ulcers. The chronic wounds segment is projected to hold the dominant share in the wound healing market due to aging populations.

But a bigger opportunity lies in the multiple uses the SpinCare can ultimately provide. It is, in essence, a platform technology that could benefit multiple categories, not just medical wound care. Currently, the SpinCare’s capsules do not contain any active ingredients.

Nanomedic is already researching how to add different additives, such as antibacterial compliments, collagen, silicone, cannabinoids—and, eventually, stem cells and cellular treatments.

Such advancements would propel the device to new markets, like plastic surgery, aesthetics, and dermatology. The latter, for example, spans “burns” caused by deep, cosmetic laser peels.

“Because it is a solution, we can combine additives inside,” explains Katz. “By that, we are transforming the transient skin into a drug delivery system and slow release system.”

Nanomedic is still at the premarket phase, [emphasis mine] having concluded one clinical trial related to the treatment of split graft donor site wounds and currently engaged in two ongoing burn studies. Barak anticipates FDA approval will take between nine to 12 months, during which the company will focus on building manufacturing lines and preparing for a European launch in early 2020.

According to the startup’s estimates, the product’s final price (not yet determined) will be far more affordable than traditional dressings. Nanomedic has raised $7 million in funding to date, including a grant by the EU’s Horizon 2020 SME Instrument program.

Barak believes Nanocare [sic] brings a highly cost-effective alternative to the healthcare system, but more than anything, she’s proud that SpinCare, above all else, mitigates patient pain and hassle. Some users, the company reports, are able to return to work and physical activity right away.

The Nanomedic website can be found here. The company has also produced a video featuring SpinCare,

There’s a bit more about the technology (I’m especially interested in the electrospinning) on Nanomedic’s Technology webpage,

Electrospinning technology allows the development of a wide range of products and devices, with tailored composition, geometry and morphology.

Almost any natural or synthetic polymer can be electrospun to create a nanofibrous mat. The intrinsic structure of the electrospun products, which mimics the natural extra cellular matrix (ECM), encourages quick and efficient tissue integration and minimizes medical complications.

Raphael’s article and the Nanomedic website offer more detail to what you can see in the excerpts provided here. If you have the time, I recommend checking out both.

Therapeutic nanoparticles for agricultural crops

Nanoscale drug delivery systems developed by the biomedical community may prove useful to farmers. The Canadian Broadcasting Corporation (CBC) featured the story in a May 26, 2018 online news item (with audio file; Note: A link has been removed),

Thanks to a fortuitous conversation between an Israeli chemical engineer who works on medical nanotechnology and his farmer friend, there’s a new way to deliver nourishment to nutrient-starved crops.

Avi Schroeder, the chemical engineer and cancer researcher from Technion — Israel Institute of Technology asked his friend what are the major problems facing agriculture today. “He said, ‘You know Avi, one of the major issues we’re facing is that in some of the crops we try to grow, we actually have a lack of nutrients. And then we end up not growing those crops even though they’re very valuable or very important crops.'”

This problem is only going to become more acute in many regions of the world as global population approaches eight billion people.

“Feeding them with healthy food and nutritious food is becoming a major limiting factor. And … the land we can actually grow crops on are also becoming smaller and smaller in every country because people need to build houses too. So what we want is to get actually more crops per hectare.”

The way farmers currently deliver nutrients to malnourished agricultural crops is very inefficient. Much of what is added to the leaves of the plant is wasted. Most of it washes away or isn’t taken up by the plants.

If plants don’t get the nutrients they need, their leaves start to yellow, their growth becomes stunted and they don’t produce as much food as nutrient-rich crops.

“We work primarily in the field of medicine,” says Schroeder. “What we do many times is we’ll load minuscule doses of medicine into nanoparticles — we’ll inject them into the patient. And those nanoparticles will actually be able to detect the disease site inside the body. That sounded very, very similar to the problem the farmers were actually facing — how do you get a medicine into a crop or a nutrient into a crop and get it to the right region within the crop where it’s actually necessary.”

The nanoparticles Schroeder developed are tiny packages that can deliver nutrients — any nutrients — that are placed inside.

A June 6, 2018 news item on Nanowerk offers a few more details,

An innovative technology developed at the Technion [Israel Institute of Technology] could lead to significant increases in agricultural yields. Using a nanometric transport platform on plants that was previously utilized for targeted drug delivery, researchers increased the penetration rate of nutrients into the plants, from 1% to approximately 33%.

A May 27, 2018 Technion press release, which originated the news item, fleshes out the details,

The technology exploits nanoscale delivery platforms which until now were used to transport drugs to specific targets in the patient’s body. The work was published in Scientific Reports and will be presented in Nature Press.

The use of the nanotechnology for targeted drug delivery has been the focus of research activity conducted at the Laboratory for Targeted Drug Delivery and Personalized Medicine Technologies at the Wolfson Faculty of Chemical Engineering. The present research repurposes this technology for agricultural use; and is being pursued by laboratory director Prof. Avi Schroeder and graduate student Avishai Karny.

“The constant growth in the world population demands more efficient agricultural technologies, which will produce greater supplies of healthier foods and reduce environmental damage,” said Prof. Schroeder. “The present work provides a new means of delivering essential nutrients without harming the environment.”

The researchers loaded the nutrients into liposomes which are small spheres generated in the laboratory, comprised of a fatty outer layer enveloping the required nutrients. The particles are stable in the plant’s aqueous environment and can penetrate the cells. In addition, the Technion researchers can ‘program’ them to disintegrate and release the load at precisely the location and time of interest, namely, in the roots and leaves. Disintegration occurs in acidic environments or in response to an external signal, such as light waves or heat. The molecules comprising the particles are derived from soy plants and are therefore approved and safe for consumption by both humans and animals.

In the present experiment, the researchers used 100-nanometer liposomes to deliver the nutrients iron and magnesium into both young and adult tomato crops. They demonstrated that the liposomes, which were sprayed in the form of a solution onto the leaves, penetrated the leaves and reached other leaves and roots. Only when reaching the root cells did they disintegrate and release the nutrients. As said, the technology greatly increased the nutrient penetration rate.

In addition to demonstrating the effectivity of this approach as compared to the standard spray method, the researchers also assessed the regulatory limitations associated with the spread of volatile particles.

”Our engineered liposomes are only stable within a short spraying range of up to 2 meters,” explained Prof. Schroeder. “If they travel in the air beyond that distance, they break down into safe materials (phospholipids). We hope that the success of this study will expand the research and development of similar agricultural products, to increase the yield and quality of food crops.”

This is an illustration of the work,

Each liposome (light blue bubble) was loaded with iron and magnesium particles. The liposomes sprayed on the leaves, penetrated and then spread throughout the various parts of the plant and released their load within the cells. Courtesy: Technion

Here’s a link to and a citation for the paper,

Therapeutic nanoparticles penetrate leaves and deliver nutrients to agricultural crops by Avishai Karny, Assaf Zinger, Ashima Kajal, Janna Shainsky-Roitman, & Avi Schroeder. Scientific Reports volume 8, Article number: 7589 (2018) DOI: https://doi.org/10.1038/s41598-018-25197-y Published 17 May 2018

This paper is open access.

A potpourri of robot/AI stories: killers, kindergarten teachers, a Balenciaga-inspired AI fashion designer, a conversational android, and more

Following on my August 29, 2018 post (Sexbots, sexbot ethics, families, and marriage), I’m following up with a more general piece.

Robots, AI (artificial intelligence), and androids (humanoid robots), the terms can be confusing since there’s a tendency to use them interchangeably. Confession: I do it too, but, not this time. That said, I have multiple news bits.

Killer ‘bots and ethics

The U.S. military is already testing a Modular Advanced Armed Robotic System. Credit: Lance Cpl. Julien Rodarte, U.S. Marine Corps

That is a robot.

For the purposes of this posting, a robot is a piece of hardware which may or may not include an AI system and does not mimic a human or other biological organism such that you might, under some circumstances, mistake the robot for a biological organism.

As for what precipitated this feature (in part), it seems there’s been a United Nations meeting in Geneva, Switzerland, held from August 27 – 31, 2018, about war and the use of autonomous robots, i.e., robots equipped with AI systems and designed for independent action. BTW, it’s not the first meeting the UN has held on this topic.

Bonnie Docherty, lecturer on law and associate director of armed conflict and civilian protection, international human rights clinic, Harvard Law School, has written an August 21, 2018 essay on The Conversation (also on phys.org) describing the history and the current rules around the conduct of war, as well as, outlining the issues with the military use of autonomous robots (Note: Links have been removed),

When drafting a treaty on the laws of war at the end of the 19th century, diplomats could not foresee the future of weapons development. But they did adopt a legal and moral standard for judging new technology not covered by existing treaty language.

This standard, known as the Martens Clause, has survived generations of international humanitarian law and gained renewed relevance in a world where autonomous weapons are on the brink of making their own determinations about whom to shoot and when. The Martens Clause calls on countries not to use weapons that depart “from the principles of humanity and from the dictates of public conscience.”

I was the lead author of a new report by Human Rights Watch and the Harvard Law School International Human Rights Clinic that explains why fully autonomous weapons would run counter to the principles of humanity and the dictates of public conscience. We found that to comply with the Martens Clause, countries should adopt a treaty banning the development, production and use of these weapons.

Representatives of more than 70 nations will gather from August 27 to 31 [2018] at the United Nations in Geneva to debate how to address the problems with what they call lethal autonomous weapon systems. These countries, which are parties to the Convention on Conventional Weapons, have discussed the issue for five years. My co-authors and I believe it is time they took action and agreed to start negotiating a ban next year.

Docherty elaborates on her points (Note: A link has been removed),

The Martens Clause provides a baseline of protection for civilians and soldiers in the absence of specific treaty law. The clause also sets out a standard for evaluating new situations and technologies that were not previously envisioned.

Fully autonomous weapons, sometimes called “killer robots,” would select and engage targets without meaningful human control. They would be a dangerous step beyond current armed drones because there would be no human in the loop to determine when to fire and at what target. Although fully autonomous weapons do not yet exist, China, Israel, Russia, South Korea, the United Kingdom and the United States are all working to develop them. They argue that the technology would process information faster and keep soldiers off the battlefield.

The possibility that fully autonomous weapons could soon become a reality makes it imperative for those and other countries to apply the Martens Clause and assess whether the technology would offend basic humanity and the public conscience. Our analysis finds that fully autonomous weapons would fail the test on both counts.

I encourage you to read the essay in its entirety and for anyone who thinks the discussion about ethics and killer ‘bots is new or limited to military use, there’s my July 25, 2016 posting about police use of a robot in Dallas, Texas. (I imagine the discussion predates 2016 but that’s the earliest instance I have here.)

Teacher bots

Robots come in many forms and this one is on the humanoid end of the spectrum,

Children watch a Keeko robot at the Yiswind Institute of Multicultural Education in Beijing, where the intelligent machines are telling stories and challenging kids with logic problems [downloaded from https://phys.org/news/2018-08-robot-teachers-invade-chinese-kindergartens.html]

Don’t those ‘eyes’ look almost heart-shaped? No wonder the kids love these robots, if an August 29, 2018 news item on phys.org can be believed,

The Chinese kindergarten children giggled as they worked to solve puzzles assigned by their new teaching assistant: a roundish, short educator with a screen for a face.

Just under 60 centimetres (two feet) high, the autonomous robot named Keeko has been a hit in several kindergartens, telling stories and challenging children with logic problems.

Round and white with a tubby body, the armless robot zips around on tiny wheels, its inbuilt cameras doubling up both as navigational sensors and a front-facing camera allowing users to record video journals.

In China, robots are being developed to deliver groceries, provide companionship to the elderly, dispense legal advice and now, as Keeko’s creators hope, join the ranks of educators.

At the Yiswind Institute of Multicultural Education on the outskirts of Beijing, the children have been tasked to help a prince find his way through a desert—by putting together square mats that represent a path taken by the robot—part storytelling and part problem-solving.

Each time they get an answer right, the device reacts with delight, its face flashing heart-shaped eyes.

“Education today is no longer a one-way street, where the teacher teaches and students just learn,” said Candy Xiong, a teacher trained in early childhood education who now works with Keeko Robot Xiamen Technology as a trainer.

“When children see Keeko with its round head and body, it looks adorable and children love it. So when they see Keeko, they almost instantly take to it,” she added.

Keeko robots have entered more than 600 kindergartens across the country with its makers hoping to expand into Greater China and Southeast Asia.

Beijing has invested money and manpower in developing artificial intelligence as part of its “Made in China 2025” plan, with a Chinese firm last year unveiling the country’s first human-like robot that can hold simple conversations and make facial expressions.

According to the International Federation of Robots, China has the world’s top industrial robot stock, with some 340,000 units in factories across the country engaged in manufacturing and the automotive industry.

Moving on from hardware/software to a software only story.

AI fashion designer better than Balenciaga?

Despite the title of Katharine Schwab’s August 22, 2018 article for Fast Company, I don’t think this AI designer is better than Balenciaga, but from the pictures I’ve seen, the designs are as good, and it does present some intriguing possibilities courtesy of its neural network (Note: Links have been removed),

The AI, created by researcher Robbie Barat, has created an entire collection based on Balenciaga’s previous styles. There’s a fabulous pink and red gradient jumpsuit that wraps all the way around the model’s feet–like a onesie for fashionistas–paired with a dark slouchy coat. There’s a textural color-blocked dress, paired with aqua-green tights. And for menswear, there’s a multi-colored, shimmery button-up with skinny jeans and mismatched shoes. None of these looks would be out of place on the runway.

To create the styles, Barat collected images of Balenciaga’s designs via the designer’s lookbooks, ad campaigns, runway shows, and online catalog over the last two months, and then used them to train the pix2pix neural net. While some of the images closely resemble humans wearing fashionable clothes, many others are a bit off–some models are missing distinct limbs, and don’t get me started on how creepy [emphasis mine] their faces are. Even if the outfits aren’t quite ready to be fabricated, Barat thinks that designers could potentially use a tool like this to find inspiration. Because it’s not constrained by human taste, style, and history, the AI comes up with designs that may never occur to a person. “I love how the network doesn’t really understand or care about symmetry,” Barat writes on Twitter.

You can see the ‘creepy’ faces and some of the designs here,

Image: Robbie Barat

In contrast to the previous two stories, this one is all about algorithms; no machinery with independent movement (robot hardware) is needed.

Conversational android: Erica

Hiroshi Ishiguro and his lifelike (definitely humanoid) robots have featured here many, many times before. The most recent is a March 27, 2017 posting about his and his android’s participation at the 2017 SXSW festival.

His latest work is featured in an August 21, 2018 news item on ScienceDaily,

We’ve all tried talking with devices, and in some cases they talk back. But, it’s a far cry from having a conversation with a real person.

Now a research team from Kyoto University, Osaka University, and the Advanced Telecommunications Research Institute, or ATR, have significantly upgraded the interaction system for conversational android ERICA, giving her even greater dialog skills.

ERICA is an android created by Hiroshi Ishiguro of Osaka University and ATR, specifically designed for natural conversation through incorporation of human-like facial expressions and gestures. The research team demonstrated the updates during a symposium at the National Museum of Emerging Science in Tokyo.

Here’s the latest conversational android, Erica

Caption: The experimental set up when the subject (left) talks with ERICA (right) Credit: Kyoto University / Kawahara lab

An August 20, 2018 Kyoto University press release on EurekAlert, which originated the news item, offers more details,

“When we talk to one another, it’s never a simple back and forward progression of information,” states Tatsuya Kawahara of Kyoto University’s Graduate School of Informatics, and an expert in speech and audio processing.

“Listening is active. We express agreement by nodding or saying ‘uh-huh’ to maintain the momentum of conversation. This is called ‘backchanneling’, and is something we wanted to implement with ERICA.”

The team also focused on developing a system for ‘attentive listening’. This is when a listener asks elaborating questions, or repeats the last word of the speaker’s sentence, allowing for more engaging dialogue.

Deploying a series of distance sensors, facial recognition cameras, and microphone arrays, the team began collecting data on parameters necessary for a fluid dialog between ERICA and a human subject.

“We looked at three qualities when studying backchanneling,” continues Kawahara. “These were: timing — when a response happens; lexical form — what is being said; and prosody, or how the response happens.”

Responses were generated through machine learning using a counseling dialogue corpus, resulting in dramatically improved dialog engagement. Testing in five-minute sessions with a human subject, ERICA demonstrated significantly more dynamic speaking skill, including the use of backchanneling, partial repeats, and statement assessments.

“Making a human-like conversational robot is a major challenge,” states Kawahara. “This project reveals how much complexity there is in listening, which we might consider mundane. We are getting closer to a day where a robot can pass a Total Turing Test.”
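Out of curiosity, here is a toy, rule-based sketch (in Python, entirely my own invention, not ERICA’s learned system) of the three backchannel qualities Kawahara lists: timing, lexical form, and prosody. ERICA derives these behaviours from a counselling-dialogue corpus with machine learning; the thresholds and responses below are made up purely to show how the pieces fit together.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ListeningMoment:
    pause_sec: float     # timing: how long the speaker has been silent
    pitch_slope: float   # prosody: falling (<0) vs. rising (>0) pitch before the pause
    last_word: str       # lexical context from the speaker's utterance

def backchannel(m: ListeningMoment) -> Optional[str]:
    """Decide whether, and how, to backchannel at this moment (rules are invented)."""
    if m.pause_sec < 0.3:
        return None                        # timing: too early, keep listening
    if m.pitch_slope > 0:
        return m.last_word + "?"           # lexical form: a partial repeat invites elaboration
    if m.pause_sec < 1.0:
        return "uh-huh"                    # a short acknowledgement keeps the momentum
    return "I see. Tell me more."          # longer pause: statement assessment / prompt

print(backchannel(ListeningMoment(pause_sec=0.5, pitch_slope=-0.2, last_word="Kyoto")))  # -> uh-huh
```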

Erica seems to have been first introduced publicly in Spring 2017, from an April 2017 Erica: Man Made webpage on The Guardian website,

Erica is 23. She has a beautiful, neutral face and speaks with a synthesised voice. She has a degree of autonomy – but can’t move her hands yet. Hiroshi Ishiguro is her ‘father’ and the bad boy of Japanese robotics. Together they will redefine what it means to be human and reveal that the future is closer than we might think.

Hiroshi Ishiguro and his colleague Dylan Glas are interested in what makes a human. Erica is their latest creation – a semi-autonomous android, the product of the most funded scientific project in Japan. But these men regard themselves as artists more than scientists, and the Erica project – the result of a collaboration between Osaka and Kyoto universities and the Advanced Telecommunications Research Institute International – is a philosophical one as much as technological one.

Erica is interviewed about her hope and dreams – to be able to leave her room and to be able to move her arms and legs. She likes to chat with visitors and has one of the most advanced speech synthesis systems yet developed. Can she be regarded as being alive or as a comparable being to ourselves? Will she help us to understand ourselves and our interactions as humans better?

Erica and her creators are interviewed in the science fiction atmosphere of Ishiguro’s laboratory, and this film asks how we might form close relationships with robots in the future. Ishiguro thinks that for Japanese people especially, everything has a soul, whether human or not. If we don’t understand how human hearts, minds and personalities work, can we truly claim that humans have authenticity that machines don’t?

Ishiguro and Glas want to release Erica and her fellow robots into human society. Soon, Erica may be an essential part of our everyday life, as one of the new children of humanity.

Key credits

  • Director/Editor: Ilinca Calugareanu
  • Producer: Mara Adina
  • Executive producers for the Guardian: Charlie Phillips and Laurence Topham
  • This video is produced in collaboration with the Sundance Institute Short Documentary Fund supported by the John D and Catherine T MacArthur Foundation

You can also view the 14 min. film here.

Artworks generated by an AI system are to be sold at Christie’s auction house

KC Ifeanyi’s August 22, 2018 article for Fast Company may send a chill down some artists’ spines,

For the first time in its 252-year history, Christie’s will auction artwork generated by artificial intelligence.

Created by the French art collective Obvious, “Portrait of Edmond de Belamy” is part of a series of paintings of the fictional Belamy family that was created using a two-part algorithm. …

The portrait is estimated to sell anywhere between $7,000-$10,000, and Obvious says the proceeds will go toward furthering its algorithm.

… Famed collector Nicolas Laugero-Lasserre bought one of Obvious’s Belamy works in February, which could’ve been written off as a novel purchase where the story behind it is worth more than the piece itself. However, with validation from a storied auction house like Christie’s, AI art could shake the contemporary art scene.

“Edmond de Belamy” goes up for auction from October 23-25 [2018].

Jobs safe from automation? Are there any?

Michael Grothaus expresses more optimism about future job markets than I’m feeling in an August 30, 2018 article for Fast Company,

A 2017 McKinsey Global Institute study of 800 occupations across 46 countries found that by 2030, 800 million people will lose their jobs to automation. That’s one-fifth of the global workforce. A further one-third of the global workforce will need to retrain if they want to keep their current jobs as well. And looking at the effects of automation on American jobs alone, researchers from Oxford University found that “47 percent of U.S. workers have a high probability of seeing their jobs automated over the next 20 years.”

The good news is that while the above stats are rightly cause for concern, they also reveal that 53% of American jobs and four-fifths of global jobs are unlikely to be affected by advances in artificial intelligence and robotics. But just what are those fields? I spoke to three experts in artificial intelligence, robotics, and human productivity to get their automation-proof career advice.

Creatives

“Although I believe every single job can, and will, benefit from a level of AI or robotic influence, there are some roles that, in my view, will never be replaced by technology,” says Tom Pickersgill, …

Maintenance foreman

When running a production line, problems and bottlenecks are inevitable–and usually that’s a bad thing. But in this case, those unavoidable issues will save human jobs because their solutions will require human ingenuity, says Mark Williams, head of product at People First, …

Hairdressers

Mat Hunter, director of the Central Research Laboratory, a tech-focused co-working space and accelerator for tech startups, has seen startups trying to create all kinds of new technologies, which has given him insight into just what machines can and can’t pull off. It’s led him to believe that jobs like the humble hairdresser are safer from automation than those of, say, accountancy.

Therapists and social workers

Another automation-proof career is likely to be one involved in helping people heal the mind, says Pickersgill. “People visit therapists because there is a need for emotional support and guidance. This can only be provided through real human interaction–by someone who can empathize and understand, and who can offer advice based on shared experiences, rather than just data-driven logic.”

Teachers

Teachers are so often the unsung heroes of our society. They are overworked and underpaid–yet charged with one of the most important tasks anyone can have: nurturing the growth of young people. The good news for teachers is that their jobs won’t be going anywhere.

Healthcare workers

Doctors and nurses will also likely never see their jobs taken by automation, says Williams. While automation will no doubt better enhance the treatments provided by doctors and nurses, the fact of the matter is that robots aren’t going to outdo healthcare workers’ ability to connect with patients and make them feel understood the way a human can.

Caretakers

While humans might be fine with robots flipping their burgers and artificial intelligence managing their finances, being comfortable with a robot nannying your children or looking after your elderly mother is a much bigger ask. And that’s to say nothing of the fact that even today’s most advanced robots don’t have the physical dexterity to perform the movements and actions carers do every day.

Grothaus does offer a proviso in his conclusion: certain types of jobs are relatively safe until developers learn to replicate qualities such as empathy in robots/AI.

It’s very confusing

There’s so much news about robots, artificial intelligence, androids, and cyborgs that it’s hard to keep up with it, let alone get a feeling for where all this might be headed. When you add the fact that the terms ‘robots’ and ‘artificial intelligence’ are often used interchangeably, and that the distinction between robots/androids/cyborgs is not always clear, any attempt to peer into the future becomes even more challenging.

At this point I content myself with tracking the situation and finding definitions so I can better understand what I’m tracking. Carmen Wong’s August 23, 2018 posting on the Signals blog published by Canada’s Centre for Commercialization of Regenerative Medicine (CCRM) offers some useful definitions in the context of an article about the use of artificial intelligence in the life sciences, particularly in Canada (Note: Links have been removed),

Artificial intelligence (AI). Machine learning. To most people, these are just buzzwords and synonymous. Whether or not we fully understand what both are, they are slowly integrating into our everyday lives. Virtual assistants such as Siri? AI is at work. The personalized ads you see when you are browsing on the web or movie recommendations provided on Netflix? Thank AI for that too.

AI is defined as machines having intelligence that imitates human behaviour such as learning, planning and problem solving. A process used to achieve AI is called machine learning, where a computer uses lots of data to “train” or “teach” itself, without human intervention, to accomplish a pre-determined task. Essentially, the computer keeps on modifying its algorithm based on the information provided to get to the desired goal.

Another term you may have heard of is deep learning. Deep learning is a particular type of machine learning where algorithms are set up like the structure and function of human brains. It is similar to a network of brain cells interconnecting with each other.
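To make the distinction a little more concrete, here’s a minimal sketch of what that ‘train or teach itself’ loop looks like in practice. It’s my own toy example in Python, not something from Wong’s posting: the program is never told that the data follow y = 2x; it just keeps nudging a single parameter to reduce its error,

# Toy example of machine learning: the program adjusts a single parameter (w)
# based on data, rather than being handed the rule directly.
xs = [1.0, 2.0, 3.0, 4.0]   # inputs
ys = [2.0, 4.0, 6.0, 8.0]   # targets (they happen to follow y = 2x)

w = 0.0                     # the model's one adjustable parameter
learning_rate = 0.01

for step in range(1000):
    # gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad   # nudge w to reduce the error

print(round(w, 3))          # prints roughly 2.0, the 'learned' rule

Deep learning works on the same principle, except that instead of one parameter there are millions of them, arranged in layers of interconnected units loosely modelled on networks of brain cells.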

Toronto has seen its fair share of media-worthy AI activity. The Government of Canada, Government of Ontario, industry and multiple universities came together in March 2018 to launch the Vector Institute, with the goal of using AI to promote economic growth and improve the lives of Canadians. In May, Samsung opened its AI Centre in the MaRS Discovery District, joining a network of Samsung centres located in California, United Kingdom and Russia.

There has been a boom in AI companies over the past few years, which span a variety of industries. This year’s ranking of the top 100 most promising private AI companies covers 25 fields with cybersecurity, enterprise and robotics being the hot focus areas.

Wong goes on to explore AI deployment in the life sciences and concludes that human scientists and doctors will still be needed although she does note this in closing (Note: A link has been removed),

More importantly, empathy and support from a fellow human being could never be fully replaced by a machine (could it?), but maybe this will change in the future. We will just have to wait and see.

Artificial empathy is the term used in Lisa Morgan’s April 25, 2018 article for Information Week which unfortunately does not include any links to actual projects or researchers working on artificial empathy. Instead, the article is focused on how business interests and marketers would like to see it employed. FWIW, I have found a few references: (1) Artificial empathy Wikipedia essay (look for the references at the end of the essay for more) and (2) this open access article: Towards Artificial Empathy; How Can Artificial Empathy Follow the Developmental Pathway of Natural Empathy? by Minoru Asada.

Please let me know in the comments section of this blog if you have any insights on the matter.

Extinction of Experience (EOE)

‘Extinction of experience’ is a bit of an attention getter, isn’t it? Well, it worked for me when I first saw it, and it seems particularly apt after putting together my August 9, 2018 posting about the 2018 SIGGRAPH conference, in particular the ‘Previews’, where I featured a synthetic sound project. Here’s a little more about EOE from a July 3, 2018 news item on phys.org,

Opportunities for people to interact with nature have declined over the past century, as most people now live in urban areas and spend much of their time indoors. And while adults are not only experiencing nature less, they are also less likely to take their children outdoors and shape their attitudes toward nature, creating a negative cycle. In 1978, ecologist Robert Pyle coined the phrase “extinction of experience” (EOE) to describe this alienation from nature, and argued that this process is one of the greatest causes of the biodiversity crisis. Four decades later, the question arises: How can we break the cycle and begin to reverse EOE?

A July 3, 2018 North Carolina Museum of Natural Sciences news release, which originated the news item, delves further,

In citizen science programs, people participate in real research, helping scientists conduct studies on local, regional and even global scales. In a study released today, researchers from the North Carolina Museum of Natural Sciences, North Carolina State University, Rutgers University, and the Technion-Israel Institute of Technology propose nature-based citizen science as a means to reconnect people to nature. For people to take the next step and develop a desire to preserve nature, they need to not only go outdoors or learn about nature, but to develop emotional connections to and empathy for nature. Because citizen science programs usually involve data collection, they encourage participants to search for, observe and investigate natural elements around them. According to co-author Caren Cooper, assistant head of the Biodiversity Lab at the N.C. Museum of Natural Sciences, “Nature-based citizen science provides a structure and purpose that might help people notice nature around them and appreciate it in their daily lives.”

To search for evidence of these patterns across programs and the ability of citizen science to reach non-scientific audiences, the researchers studied the participants of citizen science programs. They reviewed 975 papers, analyzed results from studies that included participants’ motivations and/or outcomes in nature-oriented programs, and found that nature-based citizen science fosters cognitive and emotional aspects of experiences in nature, giving it the potential to reverse EOE.

The eMammal citizen science programs offer children opportunities to use technology to observe nature in new ways. Photo: Matt Zeher.

The N.C. Museum of Natural Sciences’ Stephanie Schuttler, lead author on the study and scientist on the eMammal citizen science camera trapping program, saw anecdotal evidence of this reversal through her work incorporating camera trap research into K-12 classrooms. “Teachers would tell me how excited and surprised students were about the wildlife in their school yards,” Schuttler says. “They had no idea their campus flourished with coyotes, foxes and deer.” The study Schuttler headed shows citizen science increased participants’ knowledge, skills, interest in and curiosity about nature, and even produced positive behavioral changes. For example, one study revealed that participants in the Garden Butterfly Watch program changed gardening practices to make their yards more hospitable to wildlife. Another study found that participants in the Coastal Observation and Seabird Survey Team program started cleaning up beaches during surveys, even though this was never suggested by the facilitators.

While these results are promising, the EOE study also revealed that this work has only just begun and that most programs do not reach audiences who are not already engaged in science or nature. Only 26 of the 975 papers evaluated participants’ motivations and/or outcomes, and only one of these papers studied children, the most important demographic in reversing EOE. “Many studies were full of amazing stories on how citizen science awakened participants to the nature around them, however, most did not study outcomes,” Schuttler notes. “To fully evaluate the ability for nature-based citizen science to affect people, we encourage citizen science programs to formally study their participants and not just study the system in question.”

Additionally, most citizen science programs attracted or even recruited environmentally mindful participants who likely already spend more time outside than the average person. “If we really want to reconnect people to nature, we need to preach beyond the choir, and attract people who are not already interested in science and/or nature,” Schuttler adds. And as co-author Assaf Shwartz of Technion-Israel Institute of Technology asserts, “The best way to avert the extinction of experience is to create meaningful experiences of nature in the places where we all live and work – cities. Participating in citizen science is an excellent way to achieve this goal, as participation can enhance the sense of commitment people have to protect nature.”

Luckily, some other factors appear to influence participants’ involvement in citizen science. Desire for wellbeing, stewardship and community may provide a gateway for people to participate, an important first step in connecting people to nature. Though nature-based citizen science programs provide opportunities for people to interact with nature, further research on the mechanisms that drive this relationship is needed to strengthen our understanding of various outcomes of citizen science.

And, because I love dragonflies,

Nature-based citizen science programs, like Dragonfly Pond Watch, offer participants opportunities to observe nature more closely. Credit: Lea Shell.

Here’s a link to and a citation for the paper,

Bridging the nature gap: can citizen science reverse the extinction of experience? by Stephanie G Schuttler, Amanda E Sorensen, Rebecca C Jordan, Caren Cooper, Assaf Shwartz. Frontiers in Ecology and the Environment. DOI: https://doi.org/10.1002/fee.1826 First published: 03 July 2018

This paper is behind a paywall.

In-home (one day in the future) eyesight correction

It’s easy to become blasé about ‘futuristic’ developments but every once in a while something comes along that shocks you out of your complacency as this March 8, 2018 news item did for me,

A revolutionary, cutting-edge technology, developed by researchers at Bar-Ilan University’s Institute of Nanotechnology and Advanced Materials (BINA), has the potential to provide a new alternative to eyeglasses, contact lenses, and laser correction for refractive errors.

The technology, known as Nano-Drops, was developed by ophthalmologist Dr. David Smadja from Shaare Zedek Medical Center, Prof. Zeev Zalevsky from Bar-Ilan’s Kofkin Faculty of Engineering, and Prof. Jean-Paul Moshe Lellouche, head of the Department of Chemistry at Bar-Ilan.

It seems like it would be eye drops, eh? This March 8, 2018 Bar-Ilan University press release, which originated the news item, proceeds to redefine eyedrops,

Nano-Drops achieve their optical effect and correction by locally modifying the corneal refractive index. The magnitude and nature of the optical correction is adjusted by an optical pattern that is stamped onto the superficial layer of the corneal epithelium with a laser source. The shape of the optical pattern can be adjusted for correction of myopia (nearsightedness), hyperopia (farsightedness) or presbyopia (loss of accommodation ability). The laser stamping onto the cornea [emphasis mine] takes a few milliseconds and enables the nanoparticles to enhance and ‘activate’ this optical pattern by locally changing the refractive index and ultimately modifying the trajectory of light passing through the cornea.

The laser stamping source does not relate to the commonly known ‘laser treatment for visual correction’ that ablates corneal tissue. It is rather a small laser device that can connect to a smartphone [emphasis mine] and stamp the optical pattern onto the corneal epithelium by placing numerous adjacent pulses in a very speedy and painless fashion.  Tiny corneal spots created by the laser allow synthetic and biocompatible nanoparticles to enter and locally modify the optical power of the eye [emphasis mine] at the desired correction.

In the future this technology may enable patients to have their vision corrected in the comfort of their own home. [emphasis mine] To accomplish this, they would open an application on their smartphone to measure their vision, connect the laser source device for stamping the optical pattern at the desired correction, and then apply the Nano-Drops to activate the pattern and provide the desired correction.

Upcoming in-vivo experiments in rabbits will allow the researchers to determine how long the effect of the Nano-Drops will last after the initial application. Meanwhile, this promising technology has been shown, through ex-vivo experiments, to efficiently correct nearly 3 diopters of both myopia and presbyopia in pig eyes.
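Out of curiosity, I did a rough back-of-the-envelope calculation to get a feel for how big a change in refractive index a 3-diopter correction implies. To be clear, this is my own illustration using standard textbook values for the eye, not numbers from the Bar-Ilan release, and it treats the correction as a simple change at the front corneal surface, where the power of a single refracting surface is (n2 - n1)/R,

# Rough illustration only: textbook values, not measurements from the
# Nano-Drops experiments. Power of a single refracting surface: P = (n2 - n1) / R,
# so a change of delta_n in the surface index changes the power by about delta_n / R.
R = 0.0078                # anterior corneal radius of curvature, in metres (~7.8 mm)
correction = 3.0          # desired change in optical power, in diopters

delta_n = correction * R  # from delta_P = delta_n / R
print(f"refractive index change needed: ~{delta_n:.3f}")   # ~0.023

If that very rough estimate is in the right ballpark, a change of only a few hundredths in the index is enough to shift the eye’s focus by a few diopters, which helps explain why a thin stamped pattern of nanoparticles could plausibly do the job.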

The researchers do not seem to have published a paper about this work. However, there is a March 19, 2018 article by Shoshanna Solomon for the Times of Israel, which provides greater detail about how you or I would use this technology,

The Israeli researchers came up with a way to reshape the cornea, which accounts for 60 percent of the eye’s optical power. They tried out their system on the eyes of dead pigs, which have an optical system that is very similar to that of humans.

There are three steps to the technology that is now in development.

The first step requires patients to measure their eyesight via their smartphones. There are already a number of apps that do this, said Smadja. The second step requires the patients to use a second app — being developed by the researchers — which would have a laser device clipped onto the smartphone. This device will deliver laser pulses to the eye in less than a second that etch a shallow shape onto the cornea to help correct its refractive error. During the last stage, the Nano-Drops — made up of nontoxic nanoparticles of proteins — are put into the eye and they activate the shape, thus correcting the patients’ vision.

“It’s like when you write something with fuel on the ground and the fuel dries up, and then you throw a flame onto the fuel and the fire takes the shape of the writing,” Smadja explained. “The drops activate the pattern.”

The technology, unlike current laser operations that correct eyesight, does not remove tissue and is thus noninvasive, and it suits most eyes, expanding the scope of patients who can correct their vision, he said.

It’s a good article and, if you have the time, it’s worth reading in its entirety. Of course, it’s a long way from ‘being in development’ to ‘available at the store’.

AI x 2: the Amnesty International and Artificial Intelligence story

Amnesty International and artificial intelligence seem like an unexpected combination but it all makes sense when you read a June 13, 2018 article by Steven Melendez for Fast Company (Note: Links have been removed),

If companies working on artificial intelligence don’t take steps to safeguard human rights, “nightmare scenarios” could unfold, warns Rasha Abdul Rahim, an arms control and artificial intelligence researcher at Amnesty International in a blog post. Those scenarios could involve armed, autonomous systems choosing military targets with little human oversight, or discrimination caused by biased algorithms, she warns.

Rahim pointed at recent reports of Google’s involvement in the Pentagon’s Project Maven, which involves harnessing AI image recognition technology to rapidly process photos taken by drones. Google recently unveiled new AI ethics policies and has said it won’t continue with the project once its current contract expires next year after high-profile employee dissent over the project. …

“Compliance with the laws of war requires human judgement [sic] –the ability to analyze the intentions behind actions and make complex decisions about the proportionality or necessity of an attack,” Rahim writes. “Machines and algorithms cannot recreate these human skills, and nor can they negotiate, produce empathy, or respond to unpredictable situations. In light of these risks, Amnesty International and its partners in the Campaign to Stop Killer Robots are calling for a total ban on the development, deployment, and use of fully autonomous weapon systems.”

Rasha Abdul Rahim’s June 14, 2018 posting (I’m putting the discrepancy in publication dates down to timezone differences) on the Amnesty International website goes into greater detail (Note: Links have been removed),

Last week [June 7, 2018] Google released a set of principles to govern its development of AI technologies. They include a broad commitment not to design or deploy AI in weaponry, and come in the wake of the company’s announcement that it will not renew its existing contract for Project Maven, the US Department of Defense’s AI initiative, when it expires in 2019.

The fact that Google maintains its existing Project Maven contract for now raises an important question. Does Google consider that continuing to provide AI technology to the US government’s drone programme is in line with its new principles? Project Maven is a litmus test that allows us to see what Google’s new principles mean in practice.

As details of the US drone programme are shrouded in secrecy, it is unclear precisely what role Google plays in Project Maven. What we do know is that the US drone programme, under successive administrations, has been beset by credible allegations of unlawful killings and civilian casualties. The cooperation of Google, in any capacity, is extremely troubling and could potentially implicate it in unlawful strikes.

As AI technology advances, the question of who will be held accountable for associated human rights abuses is becoming increasingly urgent. Machine learning, and AI more broadly, impact a range of human rights including privacy, freedom of expression and the right to life. It is partly in the hands of companies like Google to safeguard these rights in relation to their operations – for us and for future generations. If they don’t, some nightmare scenarios could unfold.

Warfare has already changed dramatically in recent years – a couple of decades ago the idea of remote controlled bomber planes would have seemed like science fiction. While the drones currently in use are still controlled by humans, China, France, Israel, Russia, South Korea, the UK and the US are all known to be developing military robots which are getting smaller and more autonomous.

For example, the UK is developing a number of autonomous systems, including the BAE [Systems] Taranis, an unmanned combat aircraft system which can fly in autonomous mode and automatically identify a target within a programmed area. Kalashnikov, the Russian arms manufacturer, is developing a fully automated, high-calibre gun that uses artificial neural networks to choose targets. The US Army Research Laboratory in Maryland, in collaboration with BAE Systems and several academic institutions, has been developing micro drones which weigh less than 30 grams, as well as pocket-sized robots that can hop or crawl.

Of course, it’s not just in conflict zones that AI is threatening human rights. Machine learning is already being used by governments in a wide range of contexts that directly impact people’s lives, including policing [emphasis mine], welfare systems, criminal justice and healthcare. Some US courts use algorithms to predict future behaviour of defendants and determine their sentence lengths accordingly. The potential for this approach to reinforce power structures, discrimination or inequalities is huge.

In July 2017, the Vancouver Police Department announced its use of predictive policing software, making it the first jurisdiction in Canada to make use of the technology. My Nov. 23, 2017 posting featured the announcement.

The almost too aptly named Campaign to Stop Killer Robots can be found here. Their About Us page provides a brief history,

Formed by the following non-governmental organizations (NGOs) at a meeting in New York on 19 October 2012 and launched in London in April 2013, the Campaign to Stop Killer Robots is an international coalition working to preemptively ban fully autonomous weapons. See the Chronology charting our major actions and achievements to date.

Steering Committee

The Steering Committee is the campaign’s principal leadership and decision-making body. It is comprised of five international NGOs, a regional NGO network, and four national NGOs that work internationally:

Human Rights Watch
Article 36
Association for Aid and Relief Japan
International Committee for Robot Arms Control
Mines Action Canada
Nobel Women’s Initiative
PAX (formerly known as IKV Pax Christi)
Pugwash Conferences on Science & World Affairs
Seguridad Humana en América Latina y el Caribe (SEHLAC)
Women’s International League for Peace and Freedom

For more information, see this Overview. A Terms of Reference is also available on request, detailing the committee’s selection process, mandate, decision-making, meetings and communication, and expected commitments.

For anyone who may be interested in joining Amnesty International, go here.

Do you want that coffee with some graphene on toast?

These scientists are excited:

For those who prefer text, here’s the Rice University Feb. 13, 2018 news release (received via email and available online here and on EurekAlert here; Note: Links have been removed),

Rice University scientists who introduced laser-induced graphene (LIG) have enhanced their technique to produce what may become a new class of edible electronics.

The Rice lab of chemist James Tour, which once turned Girl Scout cookies into graphene, is investigating ways to write graphene patterns onto food and other materials to quickly embed conductive identification tags and sensors into the products themselves.

“This is not ink,” Tour said. “This is taking the material itself and converting it into graphene.”

The process is an extension of the Tour lab’s contention that anything with the proper carbon content can be turned into graphene. In recent years, the lab has developed and expanded upon its method to make graphene foam by using a commercial laser to transform the top layer of an inexpensive polymer film.

The foam consists of microscopic, cross-linked flakes of graphene, the two-dimensional form of carbon. LIG can be written into target materials in patterns and used as a supercapacitor, an electrocatalyst for fuel cells, radio-frequency identification (RFID) antennas and biological sensors, among other potential applications.

The new work reported in the American Chemical Society journal ACS Nano demonstrated that laser-induced graphene can be burned into paper, cardboard, cloth, coal and certain foods, even toast.

“Very often, we don’t see the advantage of something until we make it available,” Tour said. “Perhaps all food will have a tiny RFID tag that gives you information about where it’s been, how long it’s been stored, its country and city of origin and the path it took to get to your table.”

He said LIG tags could also be sensors that detect E. coli or other microorganisms on food. “They could light up and give you a signal that you don’t want to eat this,” Tour said. “All that could be placed not on a separate tag on the food, but on the food itself.”

Multiple laser passes with a defocused beam allowed the researchers to write LIG patterns into cloth, paper, potatoes, coconut shells and cork, as well as toast. (The bread is toasted first to “carbonize” the surface.) The process happens in air at ambient temperatures.

“In some cases, multiple lasing creates a two-step reaction,” Tour said. “First, the laser photothermally converts the target surface into amorphous carbon. Then on subsequent passes of the laser, the selective absorption of infrared light turns the amorphous carbon into LIG. We discovered that the wavelength clearly matters.”

The researchers turned to multiple lasing and defocusing when they discovered that simply turning up the laser’s power didn’t make better graphene on a coconut or other organic materials. But adjusting the process allowed them to make a micro supercapacitor in the shape of a Rice “R” on their twice-lased coconut skin.

Defocusing the laser sped the process for many materials as the wider beam allowed each spot on a target to be lased many times in a single raster scan. That also allowed for fine control over the product, Tour said. Defocusing allowed them to turn previously unsuitable polyetherimide into LIG.
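To get a sense of what ‘lased many times in a single raster scan’ means, here’s a tiny back-of-the-envelope sketch. The beam diameter and line spacing below are made-up illustration values, not numbers from the Rice paper,

# Hypothetical numbers, for illustration only (not taken from the ACS Nano paper).
beam_diameter_um = 120.0   # defocused laser spot size, in micrometres
line_spacing_um = 25.0     # distance between adjacent raster lines, in micrometres

# If the beam is wider than the line spacing, each point on the surface is
# swept by several adjacent raster lines, i.e. it gets lased several times per scan.
passes_per_spot = beam_diameter_um / line_spacing_um
print(f"each spot is lased roughly {passes_per_spot:.0f} times per scan")   # ~5

Whatever the real numbers are, the general point stands: a wider (defocused) beam overlaps itself from one raster line to the next, so a single scan can deliver the multiple exposures needed for the two-step conversion Tour describes.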

“We also found we could take bread or paper or cloth and add fire retardant to them to promote the formation of amorphous carbon,” said Rice graduate student Yieu Chyan, co-lead author of the paper. “Now we’re able to take all these materials and convert them directly in air without requiring a controlled atmosphere box or more complicated methods.”

The common element of all the targeted materials appears to be lignin, Tour said. An earlier study relied on lignin, a complex organic polymer that forms rigid cell walls, as a carbon precursor to burn LIG in oven-dried wood. Cork, coconut shells and potato skins have even higher lignin content, which made it easier to convert them to graphene.

Tour said flexible, wearable electronics may be an early market for the technique. “This has applications to put conductive traces on clothing, whether you want to heat the clothing or add a sensor or conductive pattern,” he said.

Rice alumnus Ruquan Ye is co-lead author of the study. Co-authors are Rice graduate student Yilun Li and postdoctoral fellow Swatantra Pratap Singh and Professor Christopher Arnusch of Ben-Gurion University of the Negev, Israel. Tour is the T.T. and W.F. Chao Chair in Chemistry as well as a professor of computer science and of materials science and nanoengineering at Rice.

The Air Force Office of Scientific Research supported the research.

Here’s a link to and a citation for the paper,

Laser-Induced Graphene by Multiple Lasing: Toward Electronics on Cloth, Paper, and Food by Yieu Chyan, Ruquan Ye, Yilun Li, Swatantra Pratap Singh, Christopher J. Arnusch, and James M. Tour. ACS Nano DOI: 10.1021/acsnano.7b08539 Publication Date (Web): February 13, 2018

Copyright © 2018 American Chemical Society

This paper is behind a paywall.

h/t Feb. 13, 2018 news item on Nanowerk

Implanting a synthetic cornea in your eye

For anyone who needs a refresher, Simon Shapiro, in a Nov. 5, 2017 posting on the Sci/Why blog, offers a good introduction to how eyes work and, further on in his post, describes CorNeat Vision’s corneal implants,

A quick summary of how our eyes work: they refract (bend) light and focus it on the retina. The job of doing the refraction is split between the cornea and the lens. Two thirds of the refraction is done by the cornea, so it’s critical in enabling vision. After light passes through the cornea, it passes through the pupil (in the centre of the iris) to reach the lens. Muscles in the eye (the ciliary muscle) can change the shape of the lens and allow the eye to focus nearer or further. The lens focuses light on the retina, which passes signals to the brain via the optic nerve.

It’s all pretty neat, but some things can go wrong, especially as you get older. Common problems are that the lens and/or the cornea can become cloudy.

CorNeat Vision, the Israeli ophthalmic devices startup company, released an Oct. 6, 2017 press release about its corneal implant on BusinessWire (Note: Links have been removed),

The CorNeat KPro implant is a patent-pending synthetic cornea that utilizes advanced cell technology to integrate artificial optics within resident ocular tissue. The CorNeat KPro is produced using nanoscale chemical engineering that stimulates cellular growth. Unlike previous devices, which attempted to integrate optics into the native cornea, the CorNeat KPro leverages a virtual space under the conjunctiva that is rich with fibroblast cells that heals quickly and provides robust long-term integration. Combined with a novel and simple 30-minute surgical procedure, the CorNeat KPro provides an esthetic, efficient, scalable remedy for millions of people with cornea-related visual impairments and is far superior to any available biological and synthetic alternatives.

A short animated movie that demonstrates the implantation and integration of the CorNeat KPro device to the human eye is available in the following link: www.corneat.com/product-animation.

“Corneal pathology is the second leading cause of blindness worldwide, with 20-30 million patients in need of a remedy and around 2 million new cases/year,” said CorNeat Vision CEO and VP R&D, Mr. Almog Aley-Raz. “Though a profound cause of distress and disability, existing solutions, such as corneal transplantation, are carried out only about 200,000 times/year worldwide. Together, corneal transplantation and, to a much lesser extent, artificial implants (KPros) address only 5%-10% of cases. There exists an urgent need for an efficient, long-lasting and affordable solution to corneal pathology, injury and blindness, which would alleviate the suffering and disability of millions of people. We are very excited to reach this important milestone in the development of our solution and are confident that the CorNeat KPro will enable millions to regain their sight,” he added.

“The groundbreaking results obtained in our proof of concept, which is backed by conclusive histopathological evidence, are extremely encouraging. We are entering the next phase with great confidence that CorNeat KPro will address corneal blindness just like IOLs (Intra Ocular Lens) addressed cataract,” commented Dr. Gilad Litvin, CorNeat Vision’s Chief Medical Officer and founder and the CorNeat KPro inventor. “Our novel IP, now cleared by the European Patent Office, ensures long-term retention, robust integration into the eye and an operation that is significantly shorter and simpler than Keratoplasty (Corneal transplantation).”

“The innovative approach behind CorNeat KPro, coupled with the team’s execution ability, presents a unique opportunity to finally address the global corneal blindness challenge,” added Prof. Ehud Assia, head of the ophthalmic department at the Meir Hospital in Israel, a serial ophthalmic innovator, and a member of CorNeat Vision’s scientific advisory board. “I welcome our new advisory board members, Prof. David Rootman, a true pioneer in ophthalmic surgery and one of the top corneal specialist surgeons from the University of Toronto, Canada, and Prof. Eric Gabison, a leading cornea surgeon at the Rothschild Ophthalmic Foundation research center at Bichat hospital in Paris, France. We are all looking forward to initiating the clinical trial later in 2018.”

About CorNeat Vision

CorNeat Vision is an ophthalmic medical device company with an overarching mission to promote human health, sustainability and equality worldwide. The objective of CorNeat Vision is to produce, test and market an innovative, safe and long-lasting scalable medical solution for corneal blindness, pathology and injury, a bio-artificial organ: The CorNeat KPro. For more information on CorNeat Vision and the CorNeat KPro device, visit us at www.corneat.com.

Unfortunately, I cannot find any more detail. Presumably the company principals are making sure that no competitive advantages are given away.