Tag Archives: Harvard University

Using sound to transfer quantum information

It seems sound is becoming more prominent as a means of science data communication (data sonification) and, in this case, data transfer. From a June 5, 2018 news item on ScienceDaily,

Quantum physics is on the brink of a technological breakthrough: new types of sensors, secure data transmission methods and maybe even computers could be made possible thanks to quantum technologies. However, the main obstacle here is finding the right way to couple and precisely control a sufficient number of quantum systems (for example, individual atoms).

A team of researchers from TU Wien and Harvard University has found a new way to transfer the necessary quantum information. They propose using tiny mechanical vibrations. The atoms are coupled with each other by ‘phonons’ — the smallest quantum mechanical units of vibrations or sound waves.

A June 5, 2018 Technical University of Vienna (TU Wien) press release, which originated the news item, explains the work in greater detail,

“We are testing tiny diamonds with built-in silicon atoms – these quantum systems are particularly promising,” says Professor Peter Rabl from TU Wien. “Normally, diamonds are made exclusively of carbon, but adding silicon atoms in certain places creates defects in the crystal lattice where quantum information can be stored.” These microscopic flaws in the crystal lattice can be used like a tiny switch that can be switched between a state of higher energy and a state of lower energy using microwaves.

Together with a team from Harvard University, Peter Rabl’s research group has developed a new idea to achieve the targeted coupling of these quantum memories within the diamond. One by one they can be built into a tiny diamond rod measuring only a few micrometres in length, like individual pearls on a necklace. Just like a tuning fork, this rod can then be made to vibrate – however, these vibrations are so small that they can only be described using quantum theory. It is through these vibrations that the silicon atoms can form a quantum-mechanical link to each other.

“Light is made from photons, the quantum of light. In the same way, mechanical vibrations or sound waves can also be described in a quantum-mechanical manner. They are comprised of phonons – the smallest possible units of mechanical vibration,” explains Peter Rabl. As the research team has now been able to show using simulation calculations, any number of these quantum memories can be linked together in the diamond rod thanks to these phonons. The individual silicon atoms are “switched on and off” using microwaves. During this process, they emit or absorb phonons. This creates a quantum entanglement of different silicon defects, thus allowing quantum information to be transferred.
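An aside for technically inclined readers: here’s a toy numerical sketch I put together of the kind of phonon-mediated link described above (it is not the researchers’ actual model). It uses the QuTiP quantum simulation library, treats each silicon defect as a simple two-level system, and couples both to a single quantized vibration of the rod; the coupling strength and other parameters are invented for illustration,

```python
import numpy as np
from qutip import basis, destroy, mesolve, qeye, sigmam, tensor

N = 5                                        # phonon Fock-space cutoff
a   = tensor(destroy(N), qeye(2), qeye(2))   # vibrational (phonon) mode of the rod
sm1 = tensor(qeye(N), sigmam(), qeye(2))     # lowering operator, defect 1
sm2 = tensor(qeye(N), qeye(2), sigmam())     # lowering operator, defect 2

g = 2 * np.pi * 0.1                          # defect-phonon coupling (arbitrary units)
# Both defects exchange excitations with the same phonon mode
# (resonant, Jaynes-Cummings-style coupling)
H = g * (a.dag() * sm1 + a * sm1.dag()) + g * (a.dag() * sm2 + a * sm2.dag())

# Start with defect 1 excited, defect 2 in its ground state, no phonons.
# (In QuTiP's convention, basis(2, 0) is the excited state for sigmam().)
psi0 = tensor(basis(N, 0), basis(2, 0), basis(2, 1))

tlist = np.linspace(0, 25, 200)
result = mesolve(H, psi0, tlist, [], [sm1.dag() * sm1, sm2.dag() * sm2])
# result.expect[0] and result.expect[1] show the excitation sloshing from
# defect 1 over to defect 2 through the shared phonon -- a phonon-mediated link.
```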

The road to a scalable quantum network
Until now it was not clear whether something like this was even possible: “Usually you would expect the phonons to be absorbed somewhere, or to come into contact with the environment and thus lose their quantum mechanical properties,” says Peter Rabl. “Phonons are the enemy of quantum information, so to speak. But with our calculations, we were able to show that, when controlled appropriately using microwaves, the phonons are in fact useable for technical applications.”

The main advantage of this new technology lies in its scalability: “There are many ideas for quantum systems that, in principle, can be used for technological applications. The biggest problem is that it is very difficult to connect enough of them to be able to carry out complicated computing operations,” says Peter Rabl. The new strategy of using phonons for this purpose could pave the way to a scalable quantum technology.

Here’s a link to and a citation for the paper,

Phonon Networks with Silicon-Vacancy Centers in Diamond Waveguides by M.-A. Lemonde, S. Meesala, A. Sipahigil, M. J. A. Schuetz, M. D. Lukin, M. Loncar, and P. Rabl. Phys. Rev. Lett. 120 (21), 213603. DOI: https://doi.org/10.1103/PhysRevLett.120.213603 Published 25 May 2018

This paper is behind a paywall.

I found it at the movies: a commentary on/review of “Films from the Future”

Kudos to anyone who recognized the reference to Pauline Kael (she changed film criticism forever) and her book “I Lost It at the Movies.” Of course, her book title was a bit of sexual innuendo, quite risqué for an important film critic in 1965 but appropriate for a period (the 1960s) associated with a sexual revolution. (There’s more about the 1960s sexual revolution in the US along with mention of a prior sexual revolution in the 1920s in this Wikipedia entry.)

The title for this commentary is based on an anecdote from Dr. Andrew Maynard’s (director of the Arizona State University [ASU] Risk Innovation Lab) popular science and technology book, “Films from the Future: The Technology and Morality of Sci-Fi Movies.”

The ‘title-inspiring’ anecdote concerns Maynard’s first viewing of ‘2001: A Space Odyssey’, when as a rather “bratty” 16-year-old who preferred to read science fiction, he discovered new ways of seeing and imagining the world. Maynard isn’t explicit about when he became a ‘techno nerd’ or how movies gave him an experience books couldn’t, but presumably at 16 he was already gearing up for a career in the sciences. That ‘movie’ revelation, received in front of a black-and-white television on January 1, 1982, eventually led him to write “Films from the Future.” (He has a PhD in physics which he is now applying to the field of risk innovation. For a more detailed description of Dr. Maynard and his work, there’s his ASU profile webpage and, of course, the introduction to his book.)

The book is quite timely. I don’t know how many people have noticed but science and scientific innovation are being covered more frequently in the media than they have been in many years. Science fairs and festivals are being founded on what seems to be a daily basis and you can now find science in art galleries. (Not to mention the movies and television, where science topics are covered in comic book adaptations, in comedy, and in standard science fiction style.) Much of this activity is centered on what are called ‘emerging technologies’. These technologies are why people argue for what’s known as ‘blue sky’ or ‘basic’ or ‘fundamental’ science, for without that science there would be no emerging technology.

Films from the Future

Isn’t reading the Table of Contents (ToC) the best way to approach a book? (From Films from the Future; Note: The formatting has been altered),

Table of Contents
Chapter One
In the Beginning 14
Beginnings 14
Welcome to the Future 16
The Power of Convergence 18
Socially Responsible Innovation 21
A Common Point of Focus 25
Spoiler Alert 26
Chapter Two
Jurassic Park: The Rise of Resurrection Biology 27
When Dinosaurs Ruled the World 27
De-Extinction 31
Could We, Should We? 36
The Butterfly Effect 39
Visions of Power 43
Chapter Three
Never Let Me Go: A Cautionary Tale of Human Cloning 46
Sins of Futures Past 46
Cloning 51
Genuinely Human? 56
Too Valuable to Fail? 62
Chapter Four
Minority Report: Predicting Criminal Intent 64
Criminal Intent 64
The “Science” of Predicting Bad Behavior 69
Criminal Brain Scans 74
Machine Learning-Based Precognition 77
Big Brother, Meet Big Data 79
Chapter Five
Limitless: Pharmaceutically-enhanced Intelligence 86
A Pill for Everything 86
The Seduction of Self-Enhancement 89
Nootropics 91
If You Could, Would You? 97
Privileged Technology 101
Our Obsession with Intelligence 105
Chapter Six
Elysium: Social Inequity in an Age of Technological Extremes 110
The Poor Shall Inherit the Earth 110
Bioprinting Our Future Bodies 115
The Disposable Workforce 119
Living in an Automated Future 124
Chapter Seven
Ghost in the Shell: Being Human in an Augmented Future 129
Through a Glass Darkly 129
Body Hacking 135
More than “Human”? 137
Plugged In, Hacked Out 142
Your Corporate Body 147
Chapter Eight
Ex Machina: AI and the Art of Manipulation 154
Plato’s Cave 154
The Lure of Permissionless Innovation 160
Technologies of Hubris 164
Superintelligence 169
Defining Artificial Intelligence 172
Artificial Manipulation 175
Chapter Nine
Transcendence: Welcome to the Singularity 180
Visions of the Future 180
Technological Convergence 184
Enter the Neo-Luddites 190
Techno-Terrorism 194
Exponential Extrapolation 200
Make-Believe in the Age of the Singularity 203
Chapter Ten
The Man in the White Suit: Living in a Material World 208
There’s Plenty of Room at the Bottom 208
Mastering the Material World 213
Myopically Benevolent Science 220
Never Underestimate the Status Quo 224
It’s Good to Talk 227
Chapter Eleven
Inferno: Immoral Logic in an Age of Genetic Manipulation 231
Decoding Make-Believe 231
Weaponizing the Genome 234
Immoral Logic? 238
The Honest Broker 242
Dictating the Future 248
Chapter Twelve
The Day After Tomorrow: Riding the Wave of Climate Change 251
Our Changing Climate 251
Fragile States 255
A Planetary “Microbiome” 258
The Rise of the Anthropocene 260
Building Resiliency 262
Geoengineering the Future 266
Chapter Thirteen
Contact: Living by More than Science Alone 272
An Awful Waste of Space 272
More than Science Alone 277
Occam’s Razor 280
What If We’re Not Alone? 283
Chapter Fourteen
Looking to the Future 288
Acknowledgments 293

The ToC gives the reader a pretty good clue as to where the author is going with his book, and Maynard explains how he chose his movies in his introductory chapter (from Films from the Future),

“There are some quite wonderful science fiction movies that didn’t make the cut because they didn’t fit the overarching narrative (Blade Runner and its sequel Blade Runner 2049, for instance, and the first of the Matrix trilogy). There are also movies that bombed with the critics, but were included because they ably fill a gap in the bigger story around emerging and converging technologies. Ultimately, the movies that made the cut were chosen because, together, they create an overarching narrative around emerging trends in biotechnologies, cybertechnologies, and materials-based technologies, and they illuminate a broader landscape around our evolving relationship with science and technology. And, to be honest, they are all movies that I get a kick out of watching.” (p. 17)

Jurassic Park (Chapter Two)

Dinosaurs do not interest me—they never have. Despite my profound indifference, I did see the movie, Jurassic Park, when it was first released (someone talked me into going). And, I am still profoundly indifferent. Thankfully, Dr. Maynard finds meaning and a connection to current trends in biotechnology,

Jurassic Park is unabashedly a movie about dinosaurs. But it’s also a movie about greed, ambition, genetic engineering, and human folly—all rich pickings for thinking about the future, and what could possibly go wrong. (p. 28)

What really stands out with Jurassic Park, over twenty-five years later, is how it reveals a very human side of science and technology. This comes out in questions around when we should tinker with technology and when we should leave well enough alone. But there is also a narrative here that appears time and time again with the movies in this book, and that is how we get our heads around the sometimes oversized roles mega-entrepreneurs play in dictating how new tech is used, and possibly abused. These are all issues that are just as relevant now as they were in 1993, and are front and center of ensuring that the technology-enabled future we’re building is one where we want to live, and not one where we’re constantly fighting for our lives. (pp. 30-1)

He goes on to describe that connection to current trends in biotechnology,

De-Extinction

In a far corner of Siberia, two Russians—Sergey Zimov and his son Nikita—are attempting to recreate the Ice Age. More precisely, their vision is to reconstruct the landscape and ecosystem of northern Siberia in the Pleistocene, a period in Earth’s history that stretches from around two and a half million years ago to eleven thousand years ago. This was a time when the environment was much colder than now, with huge glaciers and ice sheets flowing over much of the Earth’s northern hemisphere. It was also a time when humans coexisted with animals that are long extinct, including saber-tooth cats, giant ground sloths, and woolly mammoths.

The Zimovs’ ambitions are an extreme example of “Pleistocene rewilding,” a movement to reintroduce relatively recently extinct large animals, or their close modern-day equivalents, to regions where they were once common. In the case of the Zimovs, the father-and-son team believe that, by reconstructing the Pleistocene ecosystem in the Siberian steppes and elsewhere, they can slow down the impacts of climate change on these regions. These areas are dominated by permafrost, ground that never thaws through the year. Permafrost ecosystems have developed and survived over millennia, but a warming global climate (a theme we’ll come back to in chapter twelve and the movie The Day After Tomorrow) threatens to catastrophically disrupt them, and as this happens, the impacts on biodiversity could be devastating. But what gets climate scientists even more worried is potentially massive releases of trapped methane as the permafrost disappears.

Methane is a powerful greenhouse gas—some eighty times more effective at exacerbating global warming than carbon dioxide—and large-scale releases from warming permafrost could trigger catastrophic changes in climate. As a result, finding ways to keep it in the ground is important. And here the Zimovs came up with a rather unusual idea: maintaining the stability of the environment by reintroducing long-extinct species that could help prevent its destruction, even in a warmer world. It’s a wild idea, but one that has some merit. As a proof of concept, though, the Zimovs needed somewhere to start. And so they set out to create a park for de-extinct Siberian animals: Pleistocene Park.

Pleistocene Park is by no stretch of the imagination a modern-day Jurassic Park. The dinosaurs in Hammond’s park date back to the Mesozoic period, from around 250 million years ago to sixty-five million years ago. By comparison, the Pleistocene is relatively modern history, ending a mere eleven and a half thousand years ago. And the vision behind Pleistocene Park is not thrills, spills, and profit, but the serious use of science and technology to stabilize an increasingly unstable environment. Yet there is one thread that ties them together, and that’s using genetic engineering to reintroduce extinct species. In this case, the species in question is warm-blooded and furry: the woolly mammoth.

The idea of de-extinction, or bringing back species from extinction (it’s even called “resurrection biology” in some circles), has been around for a while. It’s a controversial idea, and it raises a lot of tough ethical questions. But proponents of de-extinction argue that we’re losing species and ecosystems at such a rate that we can’t afford not to explore technological interventions to help stem the flow.

Early approaches to bringing species back from the dead have involved selective breeding. The idea was simple—if you have modern ancestors of a recently extinct species, selectively breeding specimens that have a higher genetic similarity to their forebears can potentially help reconstruct their genome in living animals. This approach is being used in attempts to bring back the aurochs, an ancestor of modern cattle. But it’s slow, and it depends on the fragmented genome of the extinct species still surviving in its modern-day equivalents.

An alternative to selective breeding is cloning. This involves finding a viable cell, or cell nucleus, in an extinct but well-preserved animal and growing a new living clone from it. It’s definitely a more appealing route for impatient resurrection biologists, but it does mean getting your hands on intact cells from long-dead animals and devising ways to “resurrect” these, which is no mean feat. Cloning has potential when it comes to recently extinct species whose cells have been well preserved—for instance, where the whole animal has become frozen in ice. But it’s still a slow and extremely limited option.

Which is where advances in genetic engineering come in.

The technological premise of Jurassic Park is that scientists can reconstruct the genome of long-dead animals from preserved DNA fragments. It’s a compelling idea, if you think of DNA as a massively long and complex instruction set that tells a group of biological molecules how to build an animal. In principle, if we could reconstruct the genome of an extinct species, we would have the basic instruction set—the biological software—to reconstruct individual members of it.

The bad news is that DNA-reconstruction-based de-extinction is far more complex than this. First you need intact fragments of DNA, which is not easy, as DNA degrades easily (and is pretty much impossible to obtain, as far as we know, for dinosaurs). Then you need to be able to stitch all of your fragments together, which is akin to completing a billion-piece jigsaw puzzle without knowing what the final picture looks like. This is a Herculean task, although with breakthroughs in data manipulation and machine learning, scientists are getting better at it. But even when you have your reconstructed genome, you need the biological “wetware”—all the stuff that’s needed to create, incubate, and nurture a new living thing, like eggs, nutrients, a safe space to grow and mature, and so on. Within all this complexity, it turns out that getting your DNA sequence right is just the beginning of translating that genetic code into a living, breathing entity. But in some cases, it might be possible.

In 2013, Sergey Zimov was introduced to the geneticist George Church at a conference on de-extinction. Church is an accomplished scientist in the field of DNA analysis and reconstruction, and a thought leader in the field of synthetic biology (which we’ll come back to in chapter nine). It was a match made in resurrection biology heaven. Zimov wanted to populate his Pleistocene Park with mammoths, and Church thought he could see a way of achieving this.

What resulted was an ambitious project to de-extinct the woolly mammoth. Church and others who are working on this have faced plenty of hurdles. But the technology has been advancing so fast that, as of 2017, scientists were predicting they would be able to reproduce the woolly mammoth within the next two years.

One of those hurdles was the lack of solid DNA sequences to work from. Frustratingly, although there are many instances of well-preserved woolly mammoths, their DNA rarely survives being frozen for tens of thousands of years. To overcome this, Church and others have taken a different tack: Take a modern, living relative of the mammoth, and engineer into it traits that would allow it to live on the Siberian tundra, just like its woolly ancestors.

Church’s team’s starting point has been the Asian elephant. This is their source of base DNA for their “woolly mammoth 2.0”—their starting source code, if you like. So far, they’ve identified fifty plus gene sequences they think they can play with to give their modern-day woolly mammoth the traits it would need to thrive in Pleistocene Park, including a coat of hair, smaller ears, and a constitution adapted to cold.

The next hurdle they face is how to translate the code embedded in their new woolly mammoth genome into a living, breathing animal. The most obvious route would be to impregnate a female Asian elephant with a fertilized egg containing the new code. But Asian elephants are endangered, and no one’s likely to allow such cutting edge experimentation on the precious few that are still around, so scientists are working on an artificial womb for their reinvented woolly mammoth. They’re making progress with mice and hope to crack the motherless mammoth challenge relatively soon.

It’s perhaps a stretch to call this creative approach to recreating a species (or “reanimation” as Church refers to it) “de-extinction,” as what is being formed is a new species. … (pp. 31-4)

This selection illustrates what Maynard does so very well throughout the book: he uses each film as a launching pad for a clear, readable description of the relevant bits of science, so you understand why the premise was likely, unlikely, or pure fantasy, while linking it to contemporary practices, efforts, and issues. In the context of Jurassic Park, Maynard goes on to raise some fascinating questions, such as: Should we revive animals rendered extinct (due to obsolescence or inability to adapt to new conditions) when we could develop new animals?

General thoughts

‘Films from the Future’ offers readable (to non-scientific types) science, lively writing, and the occasional memoirish anecdote. As well, Dr. Maynard raises the curtain on aspects of the scientific enterprise that most of us do not get to see. For example, the meeting between Sergey Zimov and George Church and how it led to new ‘de-extinction’ work. He also describes the problems that the scientists encountered and are encountering. This is in direct contrast to how scientific work is usually presented in the news media as one glorious breakthrough after the next.

Maynard does discuss issues of social inequality, power, and ownership. For example, who owns your transplant or data? Puzzlingly, he doesn’t touch on the current environment where scientists in the US and elsewhere are encouraged/pressured to start up companies commercializing their work.

Nor is there any mention of how universities are participating in this grand business experiment often called ‘innovation’. (My March 15, 2017 posting describes an outcome for the CRISPR [gene editing system] patent fight taking place between Harvard University’s & MIT’s [Massachusetts Institute of Technology] Broad Institute vs the University of California at Berkeley, and my Sept. 11, 2018 posting about an art/science exhibit in Vancouver [Canada] provides an update for round 2 of the Broad Institute vs. UC Berkeley patent fight [scroll down about 65% of the way].) *To read about how my ‘cultural blindness’ shows up here scroll down to the single asterisk at the end.*

There’s a foray through machine learning and big data as applied to predictive policing in Maynard’s ‘Minority Report’ chapter (my November 23, 2017 posting describes Vancouver’s predictive policing initiative [no psychics involved], the first such in Canada). There’s no mention of surveillance technology, which, if I recall properly, was part of the film’s future environment, with surveillance both by the state and by corporations. (Mia Armstrong’s November 15, 2018 article for Slate on Chinese surveillance being exported to Venezuela provides interesting insight.)

The gaps are interesting and various. This of course points to a problem all science writers have when attempting an overview of science. (Carl Zimmer’s latest, ‘She Has Her Mother’s Laugh: The Powers, Perversions, and Potential of Heredity’, a doorstopping 574 pages, also has some gaps despite his focus on heredity.)

Maynard has worked hard to give a comprehensive overview in a remarkably compact 279 pages while developing his theme about science and the human element. In other words, science is not monolithic; it’s created by human beings and subject to all the flaws and benefits that humanity’s efforts are always subject to—scientists are people too.

The readership for ‘Films from the Future’ spans from the mildly interested science reader to someone like me who’s been writing/blogging about these topics (more or less) for about 10 years. I learned a lot reading this book.

Next time, and I’m hopeful there’ll be a next time, Maynard might want to describe the parameters he’s set for his book in more detail than is possible in his chapter headings. He could have mentioned that he’s not a cinéaste, so his descriptions of the movies are very much focused on the story as conveyed through words. He doesn’t mention colour palettes, camera angles, or, even, cultural lenses.

Take, for example, his chapter on ‘Ghost in the Shell’. Focused on the Japanese animation film and not the live-action Hollywood version, he talks about human enhancement and cyborgs. The Japanese have a different take on robots, inanimate objects, and, I assume, cyborgs than is found in Canada or the US or Great Britain, for that matter (according to a colleague of mine, an Englishwoman who lived in Japan for ten or more years). There’s also the chapter on the Ealing comedy, The Man in The White Suit, an English film from the 1950s. That too has a cultural (as well as historical) flavour but since Maynard is from England, he may take that cultural flavour for granted. ‘Never Let Me Go’ in Chapter Three was also a UK production, albeit far more recent than the Ealing comedy, and it’s interesting to consider how a UK production about cloning might differ from a US or Chinese or … production on the topic. I am hearkening back to Maynard’s anecdote about movies giving him new ways of seeing and imagining the world.

There’s a simple corrective: a couple of sentences in Maynard’s introductory chapter cautioning that an in-depth exploration of ‘cultural lenses’ was not possible without expanding the book to an unreadable size, followed by a sentence in each of the two chapters noting that there are cultural differences.

One area where I had a significant problem was the discussion of being “programmed” and having “instinctual” behaviour,

As a species, we are embarrassingly programmed to see “different” as “threatening,” and to take instinctive action against it. It’s a trait that’s exploited in many science fiction novels and movies, including those in this book. If we want to see the rise of increasingly augmented individuals, we need to be prepared for some social strife. (p. 136)

These concepts are much debated in the social sciences and there are arguments for and against ‘instincts regarding strangers and their possible differences’. I gather Dr. Maynard hews to the ‘instinct to defend/attack’ school of thought.

One final quandary: there was no sex, and I was expecting it in the Ex Machina chapter, especially now that sexbots are about to take over the world (I exaggerate). Certainly, if you’re talking about “social strife,” then sexbots would seem to be a fruitful line of inquiry, especially when there’s talk of how they could benefit families (my August 29, 2018 posting). Again, there could have been a sentence explaining why Maynard focused almost exclusively in this chapter on the discussions about artificial intelligence and superintelligence.

Taken in the context of the book, these are trifling issues and shouldn’t stop you from reading Films from the Future. What Maynard has accomplished here is impressive and I hope it’s just the beginning.

Final note

Bravo Andrew! (Note: We’ve been ‘internet acquaintances/friends’ since the first year I started blogging. When I’m referring to him in his professional capacity, he’s Dr. Maynard and when it’s not strictly in his professional capacity, it’s Andrew. For this commentary/review I wanted to emphasize his professional status.)

If you need to see a few more samples of Andrew’s writing, there’s a Nov. 15, 2018 essay on The Conversation, Sci-fi movies are the secret weapon that could help Silicon Valley grow up and a Nov. 21, 2018 article on slate.com, The True Cost of Stain-Resistant Pants; The 1951 British comedy The Man in the White Suit anticipated our fears about nanotechnology. Enjoy.

****Added at 1700 hours on Nov. 22, 2018: You can purchase Films from the Future here.

*Nov. 23, 2018: I should have been more specific and said ‘academic scientists’. In Canada, the great percentage of scientists are academic. It’s to the point where the OECD (Organization for Economic Cooperation and Development) has noted that amongst industrialized countries, Canada has very few industrial scientists in comparison to the others.

Colo(u)r-changing bandage for better compression

This is a structural colo(u)r story, from a May 29, 2018 news item on Nanowerk,

Compression therapy is a standard form of treatment for patients who suffer from venous ulcers and other conditions in which veins struggle to return blood from the lower extremities. Compression stockings and bandages, wrapped tightly around the affected limb, can help to stimulate blood flow. But there is currently no clear way to gauge whether a bandage is applying an optimal pressure for a given condition.

Now engineers at MIT [Massachusetts Institute of Technology] have developed pressure-sensing photonic fibers that they have woven into a typical compression bandage. As the bandage is stretched, the fibers change color. Using a color chart, a caregiver can stretch a bandage until it matches the color for a desired pressure, before, say, wrapping it around a patient’s leg.

The photonic fibers can then serve as a continuous pressure sensor — if their color changes, caregivers or patients can use the color chart to determine whether and to what degree the bandage needs loosening or tightening.

A May 29, 2018 MIT news release (also on EurekAlert), which originated the news item, provides more detail,

“Getting the pressure right is critical in treating many medical conditions including venous ulcers, which affect several hundred thousand patients in the U.S. each year,” says Mathias Kolle, assistant professor of mechanical engineering at MIT. “These fibers can provide information about the pressure that the bandage exerts. We can design them so that for a specific desired pressure, the fibers reflect an easily distinguished color.”

Kolle and his colleagues have published their results in the journal Advanced Healthcare Materials. Co-authors from MIT include first author Joseph Sandt, Marie Moudio, and Christian Argenti, along with J. Kenji Clark of the University of Tokyo, James Hardin of the United States Air Force Research Laboratory, Matthew Carty of Brigham and Women’s Hospital-Harvard Medical School, and Jennifer Lewis of Harvard University.

Natural inspiration

The color of the photonic fibers arises not from any intrinsic pigmentation, but from their carefully designed structural configuration. Each fiber is about 10 times the diameter of a human hair. The researchers fabricated the fiber from ultrathin layers of transparent rubber materials, which they rolled up to create a jelly-roll-type structure. Each layer within the roll is only a few hundred nanometers thick.

In this rolled-up configuration, light reflects off each interface between individual layers. With enough layers of consistent thickness, these reflections interact to strengthen some colors in the visible spectrum, for instance red, while diminishing the brightness of other colors. This makes the fiber appear a certain color, depending on the thickness of the layers within the fiber.

“Structural color is really neat, because you can get brighter, stronger colors than with inks or dyes just by using particular arrangements of transparent materials,” Sandt says. “These colors persist as long as the structure is maintained.”

The fibers’ design relies upon an optical phenomenon known as “interference,” in which light, reflected from a periodic stack of thin, transparent layers, can produce vibrant colors that depend on the stack’s geometric parameters and material composition. Optical interference is what produces colorful swirls in oily puddles and soap bubbles. It’s also what gives peacocks and butterflies their dazzling, shifting shades, as their feathers and wings are made from similarly periodic structures.
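An aside for readers who like to see the optics in action: the interference effect described above can be computed with the standard transfer-matrix method. Here’s a minimal Python sketch of my own; the refractive indices and layer thicknesses below are invented for illustration (tuned to reflect red), not the actual fiber’s parameters,

```python
import numpy as np

def multilayer_reflectance(wavelengths_nm, layers, n_in=1.0, n_out=1.5):
    """Normal-incidence reflectance of a stack of (refractive_index,
    thickness_nm) layers, via the standard optical transfer-matrix method."""
    R = []
    for lam in wavelengths_nm:
        M = np.eye(2, dtype=complex)
        for n, d in layers:
            delta = 2 * np.pi * n * d / lam          # phase across one layer
            M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                              [1j * n * np.sin(delta), np.cos(delta)]])
        B, C = M @ np.array([1.0, n_out])
        r = (n_in * B - C) / (n_in * B + C)          # amplitude reflection coefficient
        R.append(abs(r) ** 2)
    return np.array(R)

# Hypothetical rubber bilayer repeated 30 times, quarter-wave tuned to ~620 nm
stack = [(1.41, 110.0), (1.55, 100.0)] * 30
lams = np.linspace(400, 800, 400)
R = multilayer_reflectance(lams, stack)
print(f"peak reflectance {R.max():.2f} at {lams[R.argmax()]:.0f} nm")  # ~620 nm (red)
```

Stretching the fiber thins the layers, which shifts that reflection peak toward shorter wavelengths; that is the color change the bandage exploits.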

“My interest has always been in taking interesting structural elements that lie at the origin of nature’s most dazzling light manipulation strategies, to try recreating and employing them in useful applications,” Kolle says.

A multilayered approach

The team’s approach combines known optical design concepts with soft materials, to create dynamic photonic materials.

While a postdoc at Harvard in the group of Professor Joanna Aizenberg, Kolle was inspired by the work of Pete Vukusic, professor of biophotonics at the University of Exeter in the U.K., on Margaritaria nobilis, a tropical plant that produces extremely shiny blue berries. The fruits’ skin is made up of cells with a periodic cellulose structure, through which light can reflect to give the fruit its signature metallic blue color.

Together, Kolle and Vukusic sought ways to translate the fruit’s photonic architecture into a useful synthetic material. Ultimately, they fashioned multilayered fibers from stretchable materials, and assumed that stretching the fibers would change the individual layers’ thicknesses, enabling them to tune the fibers’ color. The results of these first efforts were published in Advanced Materials in 2013.

When Kolle joined the MIT faculty in the same year, he and his group, including Sandt, improved on the photonic fiber’s design and fabrication. In their current form, the fibers are made from layers of commonly used and widely available transparent rubbers, wrapped around highly stretchable fiber cores. Sandt fabricated each layer using spin-coating, a technique in which a rubber, dissolved into solution, is poured onto a spinning wheel. Excess material is flung off the wheel, leaving a thin, uniform coating, the thickness of which can be determined by the wheel’s speed.
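As a rough rule of thumb (mine, not a number from the MIT team), spin-coated film thickness scales with the inverse square root of spin speed. A quick back-of-the-envelope sketch, with a made-up proportionality constant,

```python
# Spin-coating scaling sketch: thickness ~ k / sqrt(rpm). The constant k is
# hypothetical; in practice it depends on the solution's viscosity and
# concentration, which would be calibrated for the specific rubber used.
def film_thickness_nm(rpm, k=6000.0):
    return k / rpm ** 0.5

for rpm in (1000, 2000, 4000):
    print(f"{rpm} rpm -> ~{film_thickness_nm(rpm):.0f} nm")
# Doubling the spin speed thins the film by a factor of ~1.4, keeping layers
# in the few-hundred-nanometer range mentioned earlier.
```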

For fiber fabrication, Sandt formed these two layers on top of a water-soluble film on a silicon wafer. He then submerged the wafer, with all three layers, in water to dissolve the water-soluble layer, leaving the two rubbery layers floating on the water’s surface. Finally, he carefully rolled the two transparent layers around a black rubber fiber, to produce the final colorful photonic fiber.

Reflecting pressure

The team can tune the thickness of the fibers’ layers to produce any desired color response, using standard optical modeling approaches customized for their fiber design.

“If you want a fiber to go from yellow to green, or blue, we can say, ‘This is how we have to lay out the fiber to give us this kind of [color] trajectory,'” Kolle says. “This is powerful because you might want to have something that reflects red to show a dangerously high strain, or green for ‘ok.’ We have that capacity.”

The team fabricated color-changing fibers with a tailored, strain-dependent color variation using the theoretical model, and then stitched them along the length of a conventional compression bandage, which they previously characterized to determine the pressure that the bandage generates when it’s stretched by a certain amount.

The team used the relationship between bandage stretch and pressure, and the correlation between fiber color and strain, to draw up a color chart, matching a fiber’s color (produced by a certain amount of stretching) to the pressure that is generated by the bandage.
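In software terms, that color chart is just two calibration curves chained together: observed color to strain, then strain to pressure. Here’s a hedged Python sketch with invented numbers (the team’s real calibration data aren’t in the news release),

```python
import numpy as np

# Hypothetical calibration data, for illustration only:
strain        = np.array([0.00, 0.10, 0.20, 0.30, 0.40])  # fractional stretch
peak_nm       = np.array([650,  620,  590,  560,  530])   # fiber color at each strain
pressure_mmHg = np.array([0,    10,   20,   30,   40])    # bandage pressure at each strain

def pressure_from_color(observed_peak_nm):
    # The peak wavelength falls as the bandage stretches, so reverse the
    # arrays to give np.interp the increasing x-values it expects.
    s = np.interp(observed_peak_nm, peak_nm[::-1], strain[::-1])
    return np.interp(s, strain, pressure_mmHg)

print(pressure_from_color(575))  # ~25 mmHg for a fiber reflecting at 575 nm
```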

To test the bandage’s effectiveness, Sandt and Moudio enlisted over a dozen student volunteers, who worked in pairs to apply three different compression bandages to each other’s legs: a plain bandage, a bandage threaded with photonic fibers, and a commercially-available bandage printed with rectangular patterns. This bandage is designed so that when it is applying an optimal pressure, users should see that the rectangles become squares.

Overall, the bandage woven with photonic fibers gave the clearest pressure feedback. Students were able to interpret the color of the fibers, and based on the color chart, apply a corresponding optimal pressure more accurately than either of the other bandages.

The researchers are now looking for ways to scale up the fiber fabrication process. Currently, they are able to make fibers that are several inches long. Ideally, they would like to produce meters or even kilometers of such fibers at a time.

“Currently, the fibers are costly, mostly because of the labor that goes into making them,” Kolle says. “The materials themselves are not worth much. If we could reel out kilometers of these fibers with relatively little work, then they would be dirt cheap.”

Then, such fibers could be threaded into bandages, along with textiles such as athletic apparel and shoes as color indicators for, say, muscle strain during workouts. Kolle envisions that they may also be used as remotely readable strain gauges for infrastructure and machinery.

“Of course, they could also be a scientific tool that could be used in a broader context, which we want to explore,” Kolle says.

Here’s what the bandage looks like,

Caption: Engineers at MIT have developed pressure-sensing photonic fibers that they have woven into a typical compression bandage. Credit: Courtesy of the researchers

Here’s a link to and a citation for the paper,

Stretchable Optomechanical Fiber Sensors for Pressure Determination in Compressive Medical Textiles by Joseph D. Sandt, Marie Moudio, J. Kenji Clark, James Hardin, Christian Argenti, Matthew Carty, Jennifer A. Lewis, Mathias Kolle. Advanced Healthcare Materials https://doi.org/10.1002/adhm.201800293 First published: 29 May 2018

This paper is behind a paywall.

Café Scientifique Vancouver (Canada) talk on October 30th, 2018: Solving some of Canada’s grandest challenges with synthetic biology

From an October 16, 2018 Café Scientifique Vancouver announcement (received via email),

Our next café will happen on TUESDAY, OCTOBER 30TH at 7:30PM in the back room at YAGGER’S DOWNTOWN (433 W Pender). Our speaker for the evening will be DR. VIKRAMADITYA G. YADAV. His topic will be:

SOLVING SOME OF CANADA’S GRANDEST CHALLENGES WITH SYNTHETIC BIOLOGY

A warming climate, unrepressed mining and logging, contamination of our water resources, the uncertain price and tight supply of crude oil and the growing threat of epidemics are having a profound, negative impact on the well-being of Canadians. There is an urgent need to develop and implement sustainable manufacturing technologies that can not only meet our material and energy needs, but also sustain our quality of life. Romantic and unbelievable as it sounds, Nature possesses all the answers to our challenges, and the coming decades in science and engineering will be typified by our attempts to mimic or recruit biology to address our needs. This talk will present a vivid snapshot of current and emerging research towards this goal and highlight some cutting-edge technologies under development at the University of British Columbia [UBC].

When he joined the University of Waterloo as an undergraduate student in chemical engineering, Dr. Vikramaditya G. Yadav coveted a career in Alberta’s burgeoning petrochemical sector. He even interned at Imperial Oil during his first summer break from university. Then, one fine evening during second year, he stumbled upon a copy of Juan Enríquez’s As the Future Catches You in the library and became instantly captivated with biological engineering. His journey over the past few years has taken him to Sanofi Pasteur [vaccines division of the multinational pharmaceutical company Sanofi], the Massachusetts Institute of Technology [MIT], Harvard University, and finally, the University of British Columbia, where he now leads a wonderful group of researchers working on wide-ranging topics at the interface of biology, chemistry, engineering, medicine and economics.

We hope to see you there!

Oftentimes, the speaker is asked to write up a description of their talk. Assuming that’s the case here, and based on how the description is written, I’d say the odds are good that this will be a lively, engaging talk.

For more proof, you can check out Dr. Yadav’s description of his research interests on his UBC profile page. BTW, his research group is called The Biofoundry (at UBC).

A 3D printed eye cornea and a 3D printed copy of your brain (also: a Brad Pitt connection)

Sometimes it’s hard to keep up with 3D tissue printing news. I have two news bits, one concerning eyes and another concerning brains.

3D printed human corneas

A May 29, 2018 news item on ScienceDaily trumpets the news,

The first human corneas have been 3D printed by scientists at Newcastle University, UK.

It means the technique could be used in the future to ensure an unlimited supply of corneas.

As the outermost layer of the human eye, the cornea has an important role in focusing vision.

Yet there is a significant shortage of corneas available to transplant, with 10 million people worldwide requiring surgery to prevent corneal blindness as a result of diseases such as trachoma, an infectious eye disorder.

In addition, almost 5 million people suffer total blindness due to corneal scarring caused by burns, lacerations, abrasion or disease.

The proof-of-concept research, published today [May 29, 2018] in Experimental Eye Research, reports how stem cells (human corneal stromal cells) from a healthy donor cornea were mixed together with alginate and collagen to create a solution that could be printed, a ‘bio-ink’.

Here are the proud researchers with their cornea,

Caption: Dr. Steve Swioklo and Professor Che Connon with a dyed cornea. Credit: Newcastle University, UK

A May 30, 2018 Newcastle University press release (also on EurekAlert but published on May 29, 2018), which originated the news item, adds more details,

Using a simple low-cost 3D bio-printer, the bio-ink was successfully extruded in concentric circles to form the shape of a human cornea. It took less than 10 minutes to print.

The stem cells were then shown to culture – or grow.

Che Connon, Professor of Tissue Engineering at Newcastle University, who led the work, said: “Many teams across the world have been chasing the ideal bio-ink to make this process feasible.

“Our unique gel – a combination of alginate and collagen – keeps the stem cells alive whilst producing a material which is stiff enough to hold its shape but soft enough to be squeezed out the nozzle of a 3D printer.

“This builds upon our previous work in which we kept cells alive for weeks at room temperature within a similar hydrogel. Now we have a ready to use bio-ink containing stem cells allowing users to start printing tissues without having to worry about growing the cells separately.”

The scientists, including first author and PhD student Ms Abigail Isaacson from the Institute of Genetic Medicine, Newcastle University, also demonstrated that they could build a cornea to match a patient’s unique specifications.

The dimensions of the printed tissue were originally taken from an actual cornea. By scanning a patient’s eye, they could use the data to rapidly print a cornea which matched the size and shape.

Professor Connon added: “Our 3D printed corneas will now have to undergo further testing and it will be several years before we could be in the position where we are using them for transplants.

“However, what we have shown is that it is feasible to print corneas using coordinates taken from a patient eye and that this approach has potential to combat the world-wide shortage.”
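Out of curiosity, here’s what a concentric-circle extrusion path might look like as code: a toy toolpath generator of my own, with dimensions invented for illustration (the Newcastle team’s actual print parameters aren’t given in the release),

```python
import numpy as np

def concentric_circle_path(r_outer_mm=6.0, spacing_mm=0.3, pts_per_ring=100):
    """Return a list of (x, y) coordinate loops, outer ring inward,
    approximating a concentric-circle deposition pattern."""
    loops = []
    r = r_outer_mm
    while r > 0:
        theta = np.linspace(0, 2 * np.pi, pts_per_ring)
        loops.append(np.column_stack([r * np.cos(theta), r * np.sin(theta)]))
        r -= spacing_mm
    return loops

rings = concentric_circle_path()
print(len(rings), "rings")  # 20 rings for a hypothetical 12-mm-diameter cornea blank
```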

Here’s a link to and a citation for the paper,

3D bioprinting of a corneal stroma equivalent by Abigail Isaacson, Stephen Swioklo, Che J. Connon. Experimental Eye Research Volume 173, August 2018, Pages 188–193. doi: 10.1016/j.exer.2018.05.010 [Epub ahead of print May 14, 2018]

This paper is behind a paywall.

A 3D printed copy of your brain

I love the title for this May 30, 2018 Wyss Institute for Biologically Inspired Engineering news release: Creating piece of mind by Lindsay Brownell (also on EurekAlert),

What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, Ph.D., who had a baseball-sized tumor removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group. Curious to see what his brain actually looked like before the tumor was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI [magnetic resonance imaging] and CT [computed tomography] scans, but was frustrated that existing methods were prohibitively time-intensive, cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’” says Ahmed Hosny, who was a Research Fellow at the Wyss Institute at the time and is now a machine learning engineer at the Dana-Farber Cancer Institute. The result of that impromptu collaboration – which grew to involve James Weaver, Ph.D., Senior Research Scientist at the Wyss Institute; Neri Oxman, [emphasis mine] Ph.D., Director of the MIT Media Lab’s Mediated Matter group and Associate Professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centers in the US and Germany – is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail. The research is reported in 3D Printing and Additive Manufacturing.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, M.D. Ph.D., an Assistant Professor of Radiology at the University of Washington and clinical radiologist at the Seattle VA, and co-author of the paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labor currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Imaging technologies like MRI and CT scans produce high-resolution images as a series of “slices” that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object(s) of interest need to be isolated from surrounding tissue and converted into surface meshes in order to be printed. This is achieved via either a very time-intensive process called “segmentation” where a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample), or an automatic “thresholding” process in which a computer program quickly converts areas that contain grayscale pixels into either solid black or solid white pixels, based on a shade of gray that is chosen to be the threshold between black and white. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often over- or under-exaggerates the size of a feature of interest and washes out critical detail.

The new method described by the paper’s authors gives medical professionals the best of both worlds, offering a fast and highly accurate method for converting complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of gray rather than the pixels themselves varying in color.

Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of gray into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
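For the technically inclined, the thresholding-versus-dithering contrast is easy to see in code. Below is a minimal Python sketch of Floyd–Steinberg error diffusion, the classic algorithm in this family; the paper’s exact dithering scheme isn’t spelled out in the news release, so treat this purely as an illustration,

```python
import numpy as np

def floyd_steinberg(gray):
    """Turn a grayscale image (floats in [0, 1]) into a black/white bitmap,
    diffusing each pixel's quantization error to its neighbors so that the
    local density of black pixels preserves the original shades of gray."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out

# Hard thresholding, by contrast, is just (gray >= 0.5) -- it wipes out every
# shade between black and white, which is where the detail gets lost.
ramp = np.tile(np.linspace(0, 1, 64), (64, 1))  # toy grayscale gradient
bitmap = floyd_steinberg(ramp)                  # black-pixel density now encodes gray
```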

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumor that faithfully preserved all of the gradations of detail present in the raw MRI data down to a resolution that is on par with what the human eye can distinguish from about 9-10 inches away. Using this same approach, they were also able to print a variable stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it also saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour.”

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to their 3D printer’s intrinsic bitmap printing capabilities. New software packages also still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that sometime within the next 5 years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his MRI scans to see how his skull is healing post-surgery and check on his brain to make sure his tumor isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, is incredibly empowering,” he says.

“Curiosity is one of the biggest drivers of innovation and change for the greater good, especially when it involves exploring questions across disciplines and institutions. The Wyss Institute is proud to be a space where this kind of cross-field innovation can flourish,” says Wyss Institute Founding Director Donald Ingber, M.D., Ph.D., who is also the Judah Folkman Professor of Vascular Biology at Harvard Medical School (HMS) and the Vascular Biology Program at Boston Children’s Hospital, as well as Professor of Bioengineering at Harvard’s John A. Paulson School of Engineering and Applied Sciences (SEAS).

Here’s an image illustrating the work,

Caption: This 3D-printed model of Steven Keating’s skull and brain clearly shows his brain tumor and other fine details thanks to the new data processing method pioneered by the study’s authors. Credit: Wyss Institute at Harvard University

Here’s a link to and a citation for the paper,

From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets by Ahmed Hosny, Steven J. Keating, Joshua D. Dilley, Beth Ripley, Tatiana Kelil, Steve Pieper, Dominik Kolb, Christoph Bader, Anne-Marie Pobloth, Molly Griffin, Reza Nezafat, Georg Duda, Ennio A. Chiocca, James R. Stone, James S. Michaelson, Mason N. Dean, Neri Oxman, and James C. Weaver. 3D Printing and Additive Manufacturing http://doi.org/10.1089/3dp.2017.0140 Online Ahead of Print: May 29, 2018

This paper appears to be open access.

A tangential Brad Pitt connection

It’s a bit of Hollywood gossip. There was some speculation in April 2018 that Brad Pitt was dating Dr. Neri Oxman, who is highlighted in the Wyss Institute news release above. Here’s a sample of an April 13, 2018 posting on Laineygossip (Note: A link has been removed),

“It took him a long time to date, but he is now,” the insider tells PEOPLE. “He likes women who challenge him in every way, especially in the intellect department. Brad has seen how happy and different Amal has made his friend (George Clooney). It has given him something to think about.”

While a Pitt source has maintained he and Oxman are “just friends,” they’ve met up a few times since the fall and the insider notes Pitt has been flying frequently to the East Coast. He dropped by one of Oxman’s classes last fall and was spotted at MIT again a few weeks ago.

Pitt and Oxman got to know each other through an architecture project at MIT, where she works as a professor of media arts and sciences at the school’s Media Lab. Pitt has always been interested in architecture and founded the Make It Right Foundation, which builds affordable and environmentally friendly homes in New Orleans for people in need.

“One of the things Brad has said all along is that he wants to do more architecture and design work,” another source says. “He loves this, has found the furniture design and New Orleans developing work fulfilling, and knows he has a talent for it.”

It’s only been a week since Page Six first broke the news that Brad and Dr Oxman have been spending time together.

I’m fascinated by Oxman’s (and her colleagues’) furniture. Rose Brook writes about one particular Oxman piece in her March 27, 2014 posting for TCT magazine (Note: Links have been removed),

MIT Professor and 3D printing forerunner Neri Oxman has unveiled her striking acoustic chaise longue, which was made using Stratasys 3D printing technology.

Oxman collaborated with Professor W Craig Carter and Composer and fellow MIT Professor Tod Machover to explore material properties and their spatial arrangement to form the acoustic piece.

Christened Gemini, the two-part chaise was produced using a Stratasys Objet500 Connex3 multi-colour, multi-material 3D printer as well as traditional furniture-making techniques and it will be on display at the Vocal Vibrations exhibition at Le Laboratoire in Paris from March 28th 2014.

An Architect, Designer and Professor of Media, Arts and Science at MIT, Oxman’s creation aims to convey the relationship of twins in the womb through material properties and their arrangement. It was made using both subtractive and additive manufacturing and is part of Oxman’s ongoing exploration of what Stratasys’ ground-breaking multi-colour, multi-material 3D printer can do.

Brook goes on to explain how the chaise was made and the inspiration that led to it. Finally, it’s interesting to note that Oxman was working with Stratasys in 2014 and that this 2018 brain project is being developed in a joint collaboration with Stratasys.

That’s it for 3D printing today.

AI (artificial intelligence) for Good Global Summit from May 15 – 17, 2018 in Geneva, Switzerland: details and an interview with Frederic Werner

With all the talk about artificial intelligence (AI), a lot more attention seems to be paid to apocalyptic scenarios: loss of jobs, financial hardship, loss of personal agency and privacy, and more with all of these impacts being described as global. Still, there are some folks who are considering and working on ‘AI for good’.

If you’d asked me, the International Telecommunications Union (ITU) would not have been my first guess (my choice would have been the United Nations Educational, Scientific and Cultural Organization [UNESCO]) as an agency likely to host the 2018 AI for Good Global Summit. But, it turns out the ITU is a UN (United Nations) agency and, according to its Wikipedia entry, it’s an intergovernmental public-private partnership, which may explain the nature of the participants in the upcoming summit.

The news

First, there’s a May 4, 2018 ITU media advisory (received via email; you can also find the full media advisory here) about the upcoming summit,

Artificial Intelligence (AI) is now widely identified as being able to address the greatest challenges facing humanity – supporting innovation in fields ranging from crisis management and healthcare to smart cities and communications networking.

The second annual ‘AI for Good Global Summit’ will take place 15-17 May [2018] in Geneva, and seeks to leverage AI to accelerate progress towards the United Nations’ Sustainable Development Goals and ultimately benefit humanity.

WHAT: Global event to advance ‘AI for Good’ with the participation of internationally recognized AI experts. The programme will include interactive high-level panels, while ‘AI Breakthrough Teams’ will propose AI strategies able to create impact in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society – through interactive sessions. The summit will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

A special demo & exhibit track will feature innovative applications of AI designed to: protect women from sexual violence, avoid infant crib deaths, end child abuse, predict oral cancer, and improve mental health treatments for depression – as well as interactive robots including: Alice, a Dutch invention designed to support the aged; iCub, an open-source robot; and Sophia, the humanoid AI robot.

WHEN: 15-17 May 2018, beginning daily at 9 AM

WHERE: ITU Headquarters, 2 Rue de Varembé, Geneva, Switzerland (Please note: entrance to ITU is now limited for all visitors to the Montbrillant building entrance only on rue Varembé).

WHO: Confirmed participants to date include expert representatives from: Association for Computing Machinery, Bill and Melinda Gates Foundation, Cambridge University, Carnegie Mellon, Chan Zuckerberg Initiative, Consumer Trade Association, Facebook, Fraunhofer, Google, Harvard University, IBM Watson, IEEE, Intellectual Ventures, ITU, Microsoft, Massachusetts Institute of Technology (MIT), Partnership on AI, Planet Labs, Shenzhen Open Innovation Lab, University of California at Berkeley, University of Tokyo, XPRIZE Foundation, Yale University – and the participation of “Sophia” the humanoid robot and “iCub” the EU open source robotcub.

The interview

Frederic Werner, Senior Communications Officer at the International Telecommunication Union and** one of the organizers of the AI for Good Global Summit 2018 kindly took the time to speak to me and provide a few more details about the upcoming event.

Werner noted that the 2018 event grew out of a much smaller 2017 ‘workshop’, a first of its kind on beneficial AI, and has this year ballooned in size to 91 countries (about 15 participants are expected from Canada), 32 UN agencies, and substantive representation from the private sector. Dr. Yoshua Bengio of the University of Montreal (Université de Montréal) was a featured speaker at the 2017 event.

“This year, we’re focused on action-oriented projects that will help us reach our Sustainable Development Goals (SDGs) by 2030. We’re looking at near-term practical AI applications,” says Werner. “We’re matchmaking problem-owners and solution-owners.”

Academics, industry professionals, government officials, and representatives from UN agencies are gathering to work on four tracks/themes: AI and satellite imagery, AI and health, AI and smart cities, and trust in AI (described in more detail in the summit material excerpted later in this post).

In advance of this meeting, the group launched an AI repository (an action item from the 2017 meeting) on April 25, 2018 inviting people to list their AI projects (from the ITU’s April 25, 2018? AI repository news announcement),

ITU has just launched an AI Repository where anyone working in the field of artificial intelligence (AI) can contribute key information about how to leverage AI to help solve humanity’s greatest challenges.

This is the only global repository that identifies AI-related projects, research initiatives, think-tanks and organizations that aim to accelerate progress on the 17 United Nations’ Sustainable Development Goals (SDGs).

To submit a project, just press ‘Submit’ on the AI Repository site and fill in the online questionnaire, providing all relevant details of your project. You will also be asked to map your project to the relevant World Summit on the Information Society (WSIS) action lines and the SDGs. Approved projects will be officially registered in the repository database.

Benefits of participation on the AI Repository include:

WSIS Prizes recognize individuals, governments, civil society, local, regional and international agencies, research institutions and private-sector companies for outstanding success in implementing development oriented strategies that leverage the power of AI and ICTs.

Creating the AI Repository was one of the action items of last year’s AI for Good Global Summit.

We are looking forward to your submissions.

If you have any questions, please send an email to: ai@itu.int

“Your project won’t be visible immediately as we have to vet the submissions to weed out spam-type material and projects that are not in line with our goals,” says Werner. That said, there are already 29 projects in the repository. As you might expect, the UK, China, and US are represented, but so are Egypt, Uganda, Belarus, Serbia, Peru, Italy, and other countries not commonly cited in discussions of AI research.

In response to my surprise over the ITU’s role in this AI initiative, Werner also pointed out that the ITU is the only UN agency with 192* member states (countries), 150 universities, and over 700 industry members, as well as other member entities, which gives it tremendous breadth of reach. As well, the organization, founded in 1865 as the International Telegraph Union, has extensive experience with global standardization in the information technology and telecommunications industries. (See more in their Wikipedia entry.)

Finally

There is a bit more about the summit on the ITU’s AI for Good Global Summit 2018 webpage,

The 2nd edition of the AI for Good Global Summit will be organized by ITU in Geneva on 15-17 May 2018, in partnership with XPRIZE Foundation, the global leader in incentivized prize competitions, the Association for Computing Machinery (ACM) and sister United Nations agencies including UNESCO, UNICEF, UNCTAD, UNIDO, Global Pulse, UNICRI, UNODA, UNIDIR, UNODC, WFP, IFAD, UNAIDS, WIPO, ILO, UNITAR, UNOPS, OHCHR, UN University, WHO, UNEP, ICAO, UNDP, The World Bank, UN DESA, CTBTO, UNISDR, UNOG, UNOOSA, UNFPA, UNECE, UNDPA, and UNHCR.

The AI for Good series is the leading United Nations platform for dialogue on AI. The action-oriented 2018 summit will identify practical applications of AI and supporting strategies to improve the quality and sustainability of life on our planet. The summit will continue to formulate strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

While the 2017 summit sparked the first ever inclusive global dialogue on beneficial AI, the action-oriented 2018 summit will focus on impactful AI solutions able to yield long-term benefits and help achieve the Sustainable Development Goals. ‘Breakthrough teams’ will demonstrate the potential of AI to map poverty and aid with natural disasters using satellite imagery, how AI could assist the delivery of citizen-centric services in smart cities, and new opportunities for AI to help achieve Universal Health Coverage, and finally to help achieve transparency and explainability in AI algorithms.

Teams will propose impactful AI strategies able to be enacted in the near term, guided by an expert audience of mentors representing government, industry, academia and civil society. Strategies will be evaluated by the mentors according to their feasibility and scalability, potential to address truly global challenges, degree of supporting advocacy, and applicability to market failures beyond the scope of government and industry. The exercise will connect AI innovators with public and private-sector decision-makers, building collaboration to take promising strategies forward.

“As the UN specialized agency for information and communication technologies, ITU is well placed to guide AI innovation towards the achievement of the UN Sustainable Development Goals. We are providing a neutral platform for international dialogue aimed at building a common understanding of the capabilities of emerging AI technologies.” Houlin Zhao, Secretary-General of ITU

Should you be close to Geneva, it seems that registration is still open. Just go to the ITU’s AI for Good Global Summit 2018 webpage, scroll the page down to ‘Documentation’ and you will find a link to the invitation and a link to online registration. Participation is free but I expect that you are responsible for your travel and accommodation costs.

For anyone unable to attend in person, the summit will be livestreamed (webcast in real time) and you can watch the sessions by following the link below,

https://www.itu.int/en/ITU-T/AI/2018/Pages/webcast.aspx

For those of us on the West Coast of Canada and in other parts of the world distant from Geneva, the nine-hour time difference between Geneva (Switzerland) and here needs to be taken into account when viewing the proceedings. If you can’t manage the time difference, the sessions are being recorded and will be posted at a later date.

*’132 member states’ corrected to ‘192 member states’ on May 11, 2018 at 1500 hours PDT.

*Redundant ‘and’ removed on July 19, 2018.

Santiago Ramón y Cajal and the butterflies of the soul

The Cajal exhibit of drawings was here in Vancouver (Canada) this past fall (2017) and I still carry the memory of that glorious experience (see my Sept. 11, 2017 posting for more about the show and associated events). It seems Cajal’s drawings met with a similar response in New York City, from a January 18, 2018 article by Roberta Smith for the New York Times,

It’s not often that you look at an exhibition with the help of the very apparatus that is its subject. But so it is with “The Beautiful Brain: The Drawings of Santiago Ramón y Cajal” at the Grey Art Gallery at New York University, one of the most unusual, ravishing exhibitions of the season.

The show finished its run on March 31, 2018 and is now on its way to the Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts for its opening on May 3, 2018. It looks like they have an exciting lineup of events to go along with the exhibit (from MIT’s The Beautiful Brain: The Drawings of Santiago Ramón y Cajal exhibit and event page),

SUMMER PROGRAMS

ONGOING

Spotlight Tours
Explorations led by local and Spanish scientists, artists, and entrepreneurs who will share their unique perspectives on particular aspects of the exhibition. (2:00 pm on select Tuesdays and Saturdays)

Tue, May 8 – Mark Harnett, Fred and Carole Middleton Career Development Professor at MIT and McGovern Institute Investigator
Sat, May 26 – Marion Boulicault, MIT Graduate Student and Neuroethics Fellow in the Center for Sensorimotor Neural Engineering
Tue, June 5 – Kelsey Allen, Graduate researcher, MIT Center for Brains, Minds, and Machines
Sat, Jun 23 – Francisco Martin-Martinez, Research Scientist in MIT’s Laboratory for Atomistic & Molecular Mechanics and President of the Spanish Foundation for Science and Technology
Jul 21 – Alex Gomez-Marin, Principal Investigator of the Behavior of Organisms Laboratory in the Instituto de Neurociencias, Spain
Tue, Jul 31 – Julie Pryor, Director of Communications at the McGovern Institute for Brain Research at MIT
Tue, Aug 28 – Satrajit Ghosh, Principal Research Scientist at the McGovern Institute for Brain Research at MIT, Assistant Professor in the Department of Otolaryngology at Harvard Medical School, and faculty member in the Speech and Hearing Biosciences and Technology program in the Harvard Division of Medical Sciences

Idea Hub
Drop in and explore expansion microscopy in our maker-space.

Visualizing Science Workshop
Experiential learning with micro-scale biological images. (pre-registration required)

Gallery Demonstrations
Researchers share the latest on neural anatomy, signal transmission, and modern imaging techniques.

EVENTS

Teen Science Café: Mindful Matters
MIT researchers studying the brain share their mind-blowing findings.

Neuron Paint Night
Create a painting of cerebral cortex neurons and learn about the EyeWire citizen science game.

Cerebral Cinema Series
Hear from researchers and then compare real science to depictions on the big screen.

Brainy Trivia
Test your brain power in a night of science trivia and short, snappy research talks.

Come back to see our exciting lineup for the fall!

If you don’t have a chance to see the show or if you’d like a preview, I encourage you to read Smith’s article as it has embedded several Cajal drawings and rendered them exceptionally well.

For those who like a little contemporary (and related) science with their art, there’s a March 30, 2018 Harvard Medical School (HMS) news release by Kevin Jang (also on EurekAlert); Note: All links save one have been removed,

Drawing of the cells of the chick cerebellum by Santiago Ramón y Cajal, from “Estructura de los centros nerviosos de las aves,” Madrid, circa 1905

Modern neuroscience, for all its complexity, can trace its roots directly to a series of pen-and-paper sketches rendered by Nobel laureate Santiago Ramón y Cajal in the late 19th and early 20th centuries.

His observations and drawings exposed the previously hidden composition of the brain, revealing neuronal cell bodies and delicate projections that connect individual neurons together into intricate networks.

As he explored the nervous systems of various organisms under his microscope, a natural question arose: What makes a human brain different from the brain of any other species?

At least part of the answer, Ramón y Cajal hypothesized, lay in a specific class of neuron—one found in a dazzling variety of shapes and patterns of connectivity, and present in higher proportions in the human brain than in the brains of other species. He dubbed them the “butterflies of the soul.”

Known as interneurons, these cells play critical roles in transmitting information between sensory and motor neurons, and, when defective, have been linked to diseases such as schizophrenia, autism and intellectual disability.

Despite more than a century of study, however, it remains unclear why interneurons are so diverse and what specific functions the different subtypes carry out.

Now, in a study published in the March 22 [2018] issue of Nature, researchers from Harvard Medical School, New York Genome Center, New York University and the Broad Institute of MIT and Harvard have detailed for the first time how interneurons emerge and diversify in the brain.

Using single-cell analysis—a technology that allows scientists to track cellular behavior one cell at a time—the team traced the lineage of interneurons from their earliest precursor states to their mature forms in mice. The researchers identified key genetic programs that determine the fate of developing interneurons, as well as when these programs are switched on or off.

The findings serve as a guide for efforts to shed light on interneuron function and may help inform new treatment strategies for disorders involving their dysfunction, the authors said.

“We knew more than 100 years ago that this huge diversity of morphologically interesting cells existed in the brain, but their specific individual roles in brain function are still largely unclear,” said co-senior author Gordon Fishell, HMS professor of neurobiology and a faculty member at the Stanley Center for Psychiatric Research at the Broad.

“Our study provides a road map for understanding how and when distinct interneuron subtypes develop, giving us unprecedented insight into the biology of these cells,” he said. “We can now investigate interneuron properties as they emerge, unlock how these important cells function and perhaps even intervene when they fail to develop correctly in neuropsychiatric disease.”

A hippocampal interneuron. Image: Biosciences Imaging Gp, Soton, Wellcome Trust via Creative Commons

Origins and Fates

In collaboration with co-senior author Rahul Satija, core faculty member of the New York Genome Center, Fishell and colleagues analyzed brain regions in developing mice known to contain precursor cells that give rise to interneurons.

Using Drop-seq, a single-cell sequencing technique created by researchers at HMS and the Broad, the team profiled gene expression in thousands of individual cells at multiple time points.

This approach overcomes a major limitation in past research, which could analyze only the average activity of mixtures of many different cells.

In the current study, the team found that the precursor state of all interneurons had similar gene expression patterns despite originating in three separate brain regions and giving rise to 14 or more interneuron subtypes alone—a number still under debate as researchers learn more about these cells.

“Mature interneuron subtypes exhibit incredible diversity. Their morphology and patterns of connectivity and activity are so different from each other, but our results show that the first steps in their maturation are remarkably similar,” said Satija, who is also an assistant professor of biology at New York University.

“They share a common developmental trajectory at the earliest stages, but the seeds of what will cause them to diverge later—a handful of genes—are present from the beginning,” Satija said.

As they profiled cells at later stages in development, the team observed the initial emergence of four interneuron “cardinal” classes, which give rise to distinct fates. Cells were committed to these fates even in the early embryo. By developing a novel computational strategy to link precursors with adult subtypes, the researchers identified individual genes that were switched on and off when cells began to diversify.

For example, they found that the gene Mef2c—mutations of which are linked to Alzheimer’s disease, schizophrenia and neurodevelopmental disorders in humans—is an early embryonic marker for a specific interneuron subtype known as Pvalb neurons. When they deleted Mef2c in animal models, Pvalb neurons failed to develop.

These early genes likely orchestrate the execution of subsequent genetic subroutines, such as ones that guide interneuron subtypes as they migrate to different locations in the brain and ones that help form unique connection patterns with other neural cell types, the authors said.

The identification of these genes and their temporal activity now provide researchers with specific targets to investigate the precise functions of interneurons, as well as how neurons diversify in general, according to the authors.

“One of the goals of this project was to address an incredibly fascinating developmental biology question, which is how individual progenitor cells decide between different neuronal fates,” Satija said. “In addition to these early markers of interneuron divergence, we found numerous additional genes that increase in expression, many dramatically, at later time points.”

The association of some of these genes with neuropsychiatric diseases promises to provide a better understanding of these disorders and the development of therapeutic strategies to treat them, a particularly important notion given the paucity of new treatments, the authors said.

Over the past 50 years, there have been no fundamentally new classes of neuropsychiatric drugs, only newer versions of old drugs, the researchers pointed out.

“Our repertoire is no better than it was in the 1970s,” Fishell said.

“Neuropsychiatric diseases likely reflect the dysfunction of very specific cell types. Our study puts forward a clear picture of what cells to look at as we work to shed light on the mechanisms that underlie these disorders,” Fishell said. “What we will find remains to be seen, but we have new, strong hypotheses that we can now test.”

As a resource for the research community, the study data and software are open-source and freely accessible online.

A gallery of the drawings of Santiago Ramón y Cajal is currently on display in New York City, and will open at the MIT Museum in Boston in May 2018.

Christian Mayer, Christoph Hafemeister and Rachel Bandler served as co-lead authors on the study.

This work was supported by the National Institutes of Health (R01 NS074972, R01 NS081297, MH071679-12, DP2-HG-009623, F30MH114462, T32GM007308, F31NS103398), the European Molecular Biology Organization, the National Science Foundation and the Simons Foundation.

Here’s a link to and a citation for the paper,

Developmental diversification of cortical inhibitory interneurons by Christian Mayer, Christoph Hafemeister, Rachel C. Bandler, Robert Machold, Renata Batista Brito, Xavier Jaglin, Kathryn Allaway, Andrew Butler, Gord Fishell, & Rahul Satija. Nature volume 555, pages 457–462 (22 March 2018) doi:10.1038/nature25999 Published: 05 March 2018

This paper is behind a paywall.
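For readers who’d like a feel for what the single-cell analysis described above involves computationally, here’s a minimal sketch of a generic single-cell RNA-seq clustering workflow using the open-source scanpy library. To be clear, this is my illustration, not the Mayer/Satija pipeline: the file name is hypothetical and the parameters are textbook defaults.

```python
# A generic single-cell clustering sketch (not the study's actual pipeline).
# Assumes the open-source scanpy library; 'counts.h5ad' is a hypothetical
# cell-by-gene count matrix of the kind Drop-seq produces.
import scanpy as sc

adata = sc.read_h5ad("counts.h5ad")

# Quality control: drop near-empty cells and genes seen in almost no cells.
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)

# Normalize library sizes and log-transform so cells are comparable.
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)

# Reduce dimensionality, build a nearest-neighbour graph, and cluster.
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata, n_comps=50)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata)  # each cluster is a candidate cell type or state

# Marker genes such as Mef2c can then be inspected cluster by cluster to
# flag putative interneuron subtypes (e.g., future Pvalb neurons).
sc.pl.dotplot(adata, ["Mef2c", "Pvalb"], groupby="leiden")
```

Clustering cells this way at several developmental time points is roughly how one gets from thousands of individual expression profiles to statements about when interneuron lineages diverge.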

Why don’t you CRISPR yourself?

It must have been quite the conference. Josiah Zayner plunged a needle into himself and claimed to have changed his DNA (deoxyribonucleic acid) while giving his talk. (*Segue: There is some Canadian content if you keep reading.*) From an Oct. 10, 2017 article by Adele Peters for Fast Company (Note: A link has been removed),

“What we’ve got here is some DNA, and this is a syringe,” Josiah Zayner tells a room full of synthetic biologists and other researchers. He fills the needle and plunges it into his skin. “This will modify my muscle genes and give me bigger muscles.”

Zayner, a biohacker–basically meaning he experiments with biology in a DIY lab rather than a traditional one–was giving a talk called “A Step-by-Step Guide to Genetically Modifying Yourself With CRISPR” at the SynBioBeta conference in San Francisco, where other presentations featured academics in suits and the young CEOs of typical biotech startups. Unlike the others, he started his workshop by handing out shots of scotch and a booklet explaining the basics of DIY [do-it-yourself] genome engineering.

If you want to genetically modify yourself, it turns out, it’s not necessarily complicated. As he offered samples in small baggies to the crowd, Zayner explained that it took him about five minutes to make the DNA that he brought to the presentation. The vial held Cas9, an enzyme that snips DNA at a particular location targeted by guide RNA, in the gene-editing system known as CRISPR. In this case, it was designed to knock out the myostatin gene, which produces a hormone that limits muscle growth and lets muscles atrophy. In a study in China, dogs with the edited gene had double the muscle mass of normal dogs. If anyone in the audience wanted to try it, they could take a vial home and inject it later. Even rubbing it on skin, Zayner said, would have some effect on cells, albeit limited.
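An aside on how that ‘targeting’ works: Cas9 cuts wherever its roughly 20-nucleotide guide RNA matches the DNA immediately upstream of a short ‘NGG’ motif (the PAM). Enumerating candidate guide sites is simple string matching, as in this purely illustrative Python sketch; the sequence is invented, and real guide design (as for the myostatin gene) adds off-target searches, GC-content checks and more.

```python
# Minimal sketch: enumerate candidate Cas9 guide sites in a DNA string.
# Cas9 cuts a few bases upstream of an 'NGG' PAM motif; the 20 bases
# preceding the PAM form the guide (protospacer) sequence.
# The example sequence below is invented for illustration only.

def candidate_guides(dna, guide_len=20):
    """Yield (position, guide, pam) for every NGG PAM with room for a guide."""
    dna = dna.upper()
    for i in range(guide_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # 'NGG' means: any base, then two Gs
            yield i, dna[i - guide_len:i], pam

example = "ATGCCTACTTGGAGCCACAGCGGTAGAGTTTGGCTTGGCGG"
for pos, guide, pam in candidate_guides(example):
    print(f"PAM at {pos}: guide={guide} pam={pam}")
```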

Peters goes on to note that Zayner has a PhD in molecular biology and biophysics and worked for NASA (US National Aeronautics and Space Administration). Zayner’s Wikipedia entry fills in a few more details (Note: Links have been removed),

Zayner graduated from the University of Chicago with a Ph.D. in biophysics in 2013. He then spent two years as a researcher at NASA’s Ames Research Center,[2] where he worked on Martian colony habitat design. While at the agency, Zayner also analyzed speech patterns in online chat, Twitter, and books, and found that language on Twitter and online chat is closer to how people talk than to how they write.[3] Zayner found NASA’s scientific work less innovative than he expected, and upon leaving in January 2016, he launched a crowdfunding campaign to provide CRISPR kits to let the general public experiment with editing bacterial DNA. He also continued his grad school business, The ODIN, which sells kits to let the general public experiment at home. As of May 2016, The ODIN had four employees and operates out of Zayner’s garage.[2]

He refers to himself as a biohacker and believes in the importance in letting the general public participate in scientific experimentation, rather than leaving it segregated to labs.[2][4][1] Zayner found the biohacking community exclusive and hierarchical, particularly in the types of people who decide what is “safe”. He hopes that his projects can let even more people experiment in their homes. Other scientists responded that biohacking is inherently privileged, as it requires leisure time and money, and that deviance from the safety rules of concern would lead to even harsher regulations for all.[5] Zayner’s public CRISPR kit campaign coincided with wider scrutiny over genetic modification. Zayner maintained that these fears were based on misunderstandings of the product, as genetic experiments on yeast and bacteria cannot produce a viral epidemic.[6][7] In April 2015, Zayner ran a hoax on Craigslist to raise awareness about the future potential of forgery in forensics genetics testing.[8]

In February 2016, Zayner performed a full body microbiome transplant on himself, including a fecal transplant, to experiment with microbiome engineering and see if he could cure himself from gastrointestinal and other health issues. The microbiome from the donors feces successfully transplanted in Zayner’s gut according to DNA sequencing done on samples.[2] This experiment was documented by filmmakers Kate McLean and Mario Furloni and turned into the short documentary film Gut Hack.[9]

In December 2016, Zayner created a fluorescent beer by engineering yeast to contain the green fluorescent protein from jellyfish. Zayner’s company, The ODIN, released kits to allow people to create their own engineered fluorescent yeast and this was met with some controversy as the FDA declared the green fluorescent protein can be seen as a color additive.[10] Zayner views the kit as a way that individuals can use genetic engineering to create things in their everyday lives.[11]

I found the video for Zayner’s now completed crowdfunding campaign,

I also found The ODIN website (mentioned in the Wikipedia essay) where they claim to be selling various gene editing and gene engineering kits including the CRISPR editing kits mentioned in Peters’ article,

In 2016, he [Zayner] sold $200,000 worth of products, including a kit for yeast that can be used to brew glowing bioluminescent beer, a kit to discover antibiotics at home, and a full home lab that’s roughly the cost of a MacBook Pro. In 2017, he expects to double sales. Many kits are simple, and most buyers probably aren’t using the supplies to attempt to engineer themselves (many kits go to classrooms). But Zayner also hopes that as people using the kits gain genetic literacy, they experiment in wilder ways.

Zayner sells a full home biohacking lab that’s roughly the cost of a MacBook Pro. [Photo: The ODIN]

He questions whether traditional research methods, like randomized controlled trials, are the only way to make discoveries, pointing out that in newer personalized medicine (such as immunotherapy for cancer, which is personalized for each patient), a sample size of one person makes sense. At his workshop, he argued that people should have the choice to self-experiment if they want to; we also change our DNA when we drink alcohol or smoke cigarettes or breathe in dirty city air. Other society-sanctioned activities are more dangerous. “We sacrifice maybe a million people a year to the car gods,” he said. “If you ask someone, ‘Would you get rid of cars?’–no.” …

US researchers, both conventional types and DIY types such as Zayner, are not the only ones editing genes. The Chinese study mentioned in Peters’ article was written up in an Oct. 19, 2015 article by Antonio Regalado for the MIT [Massachusetts Institute of Technology] Technology Review (Note: Links have been removed),

Scientists in China say they are the first to use gene editing to produce customized dogs. They created a beagle with double the amount of muscle mass by deleting a gene called myostatin.

The dogs have “more muscles and are expected to have stronger running ability, which is good for hunting, police (military) applications,” Liangxue Lai, a researcher with the Key Laboratory of Regenerative Biology at the Guangzhou Institutes of Biomedicine and Health, said in an e-mail.

Lai and 28 colleagues reported their results last week in the Journal of Molecular Cell Biology, saying they intend to create dogs with other DNA mutations, including ones that mimic human diseases such as Parkinson’s and muscular dystrophy. “The goal of the research is to explore an approach to the generation of new disease dog models for biomedical research,” says Lai. “Dogs are very close to humans in terms of metabolic, physiological, and anatomical characteristics.”

Lai said his group had no plans to breed the extra-muscular beagles as pets. Other teams, however, could move quickly to commercialize gene-altered dogs, potentially editing their DNA to change their size, enhance their intelligence, or correct genetic illnesses. A different Chinese institute, BGI, said in September it had begun selling miniature pigs, created via gene editing, for $1,600 each as novelty pets.

People have been influencing the genetics of dogs for millennia. By at least 36,000 years ago, early humans had already started to tame wolves and shape the companions we have today. Charles Darwin frequently cited dog breeding in The Origin of Species to demonstrate how evolution gradually occurs by a process of selection. With CRISPR, however, evolution is no longer gradual or subject to chance. It is immediate and under human control.

It is precisely that power that is stirring wide debate and concern over CRISPR. Yet at least some researchers think that gene-edited dogs could put a furry, friendly face on the technology. In an interview this month, George Church, a professor at Harvard University who leads a large effort to employ CRISPR editing, said he thinks it will be possible to augment dogs by using DNA edits to make them live longer or simply make them smarter.

Church said he also believed the alteration of dogs and other large animals could open a path to eventual gene editing of people. “Germline editing of pigs or dogs offers a line into it,” he said. “People might say, ‘Hey, it works.’ ”

In the meantime, Zayner’s ideas are certainly thought-provoking. I’m not endorsing either his products or his ideas but it should be noted that early science pioneers such as Humphry Davy and others experimented on themselves. For anyone unfamiliar with Davy (from the Humphry Davy Wikipedia entry; Note: Links have been removed),

Sir Humphry Davy, 1st Baronet PRS MRIA FGS (17 December 1778 – 29 May 1829) was a Cornish chemist and inventor,[1] who is best remembered today for isolating a series of substances for the first time: potassium and sodium in 1807 and calcium, strontium, barium, magnesium and boron the following year, as well as discovering the elemental nature of chlorine and iodine. He also studied the forces involved in these separations, inventing the new field of electrochemistry. Berzelius called Davy’s 1806 Bakerian Lecture On Some Chemical Agencies of Electricity[2] “one of the best memoirs which has ever enriched the theory of chemistry.”[3] He was a Baronet, President of the Royal Society (PRS), Member of the Royal Irish Academy (MRIA), and Fellow of the Geological Society (FGS). He also invented the Davy lamp and a very early form of incandescent light bulb.

Canadian content*

A Nov. 11, 2017 posting on the Canadian Broadcasting Corporation’s (CBC) Quirks and Quarks blog notes that self-experimentation has a long history and goes on to describe Zayner’s and others’ biohacking exploits before describing the legality of biohacking in Canada,

With biohackers entering into the space traditionally held by scientists and clinicians, it begs questions. Professor Timothy Caulfield, a Canada research chair in health, law and policy at the University of Alberta, says when he hears of somebody giving themselves biohacked gene therapy, he wonders: “Is this legal? Is this safe? And if it’s not safe, is there anything that we can do about regulating it? And to be honest with you that’s a tough question and I think it’s an open question.”

In Canada, Caulfield says, Health Canada focuses on products. “You have to have something that you are going to regulate or you have to have something that’s making health claims. So if there is a product that is saying I can cure X, Y, or Z, Health Canada can say, ‘Well let’s make sure the science really backs up that claim.’ The problem with these do-it-yourself approaches is there isn’t really a product. You know these people are experimenting on themselves with something that may or may not be designed for health purposes.”

According to Caulfield, if you could buy a gene therapy kit that was being marketed to you to biohack yourself, that would be different. “Health Canada could jump in. But right here that’s not the case,” he says.

There are places in the world that do regulate biohacking, says Caulfield. “Germany, for example, they have specific laws for it. And here in Canada we do have a regulatory framework that says that you cannot do gene therapy that will alter the germ line. In other words, you can’t do gene therapy or any kind of genetic editing that will create a change that you will pass on to your offspring. So that would be illegal, but that’s not what’s happening here. And I don’t think there’s a regulatory framework that adequately captures it.”

Infectious disease and policy experts aren’t that concerned yet about the possibility of a biohacker unleashing a genetically modified super germ into the population.

“I think in the future that could be a problem,” says Caulfield, “but this isn’t something that would be easy to do in your garage. I think it’s complicated science. But having said that, the science is moving quickly. We need to think about how we are going to control the potential harms.”

You can find out more about the ‘wild’ people (mostly men) of early science in Richard Holmes’ 2008 book, The Age of Wonder: How the Romantic Generation Discovered the Beauty and Terror of Science.

Finally, should you be interested in connecting with synthetic biology enthusiasts, entrepreneurs, and others, SynBioBeta is more than a conference; it’s also an activity hub.

ETA January 25, 2018 (five minutes later): There are some CRISPR/Cas9 events taking place in Toronto, Canada on January 24 and 25, 2018. One is a workshop with Portuguese artist, Marta de Menezes, and the other is a panel discussion. See my January 10, 2018 posting for more details.

*’Segue: There is some Canadian content if you keep reading.’ and ‘Canadian content’ added January 25, 2018 six minutes after first publication.

ETA February 20, 2018: Sarah Zhang’s Feb. 20, 2018 article for The Atlantic revisits Josiah Zayner’s decision to inject himself with CRISPR,

When Josiah Zayner watched a biotech CEO drop his pants at a biohacking conference and inject himself with an untested herpes treatment, he realized things had gone off the rails.

Zayner is no stranger to stunts in biohacking—loosely defined as experiments, often on the self, that take place outside of traditional lab spaces. You might say he invented their latest incarnation: He’s sterilized his body to “transplant” his entire microbiome in front of a reporter. He’s squabbled with the FDA about selling a kit to make glow-in-the-dark beer. He’s extensively documented attempts to genetically engineer the color of his skin. And most notoriously, he injected his arm with DNA encoding for CRISPR that could theoretically enhance his muscles—in between taking swigs of Scotch at a live-streamed event during an October conference. (Experts say—and even Zayner himself in the live-stream conceded—it’s unlikely to work.)

So when Zayner saw Ascendance Biomedical’s CEO injecting himself on a live-stream earlier this month, you might say there was an uneasy flicker of recognition.

“Honestly, I kind of blame myself,” Zayner told me recently. He’s been in a soul-searching mood; he recently had a kid and the backlash to the CRISPR stunt in October [2017] had been getting to him. “There’s no doubt in my mind that somebody is going to end up hurt eventually,” he said.

Yup, it’s one of the reasons for rules; people take things too far. The trick is figuring out how to achieve balance between risk taking and recklessness.

A customized cruise experience with wearable technology (and decreased personal agency?)

The days when you went cruising to ‘get away from it all’ seem to have passed (if they ever really existed) with the introduction of wearable technology that registers your every preference and makes life easier, according to Cliff Kuang’s Oct. 19, 2017 article for Fast Company,

This month [October 2017], the 141,000-ton Regal Princess will push out to sea after a nine-figure revamp of mind-boggling scale. Passengers won’t be greeted by new restaurants, swimming pools, or onboard activities, but will instead step into a future augured by the likes of Netflix and Uber, where nearly everything is on demand and personally tailored. An ambitious new customization platform has been woven into the ship’s 19 passenger decks: some 7,000 onboard sensors and 4,000 “guest portals” (door-access panels and touch-screen TVs), all of them connected by 75 miles of internal cabling. As the Carnival-owned ship cruises to Nassau, Bahamas, and Grand Turk, its 3,500 passengers will have the option of carrying a quarter-size device, called the Ocean Medallion, which can be slipped into a pocket or worn on the wrist and is synced with a companion app.

The platform will provide a new level of service for passengers; the onboard sensors record their tastes and respond to their movements, and the app guides them around the ship and toward activities aligned with their preferences. Carnival plans to roll out the platform to another seven ships by January 2019. Eventually, the Ocean Medallion could be opening doors, ordering drinks, and scheduling activities for passengers on all 102 of Carnival’s vessels across 10 cruise lines, from the mass-market Princess ships to the legendary ocean liners of Cunard.

Kuang goes on to explain the reasoning behind this innovation,

The Ocean Medallion is Carnival’s attempt to address a problem that’s become increasingly vexing to the $35.5 billion cruise industry. Driven by economics, ships have exploded in size: In 1996, Carnival Destiny was the world’s largest cruise ship, carrying 2,600 passengers. Today, Royal Caribbean’s MS Harmony of the Seas carries up to 6,780 passengers and 2,300 crew. Larger ships expend less fuel per passenger; the money saved can then go to adding more amenities—which, in turn, are geared to attracting as many types of people as possible. Today on a typical ship you can do practically anything—from attending violin concertos to bungee jumping. And that’s just onboard. Most of a cruise is spent in port, where each day there are dozens of experiences available. This avalanche of choice can bury a passenger. It has also made personalized service harder to deliver. …

Kuang also wrote this brief description of how the technology works from the passenger’s perspective in an Oct. 19, 2017 item for Fast Company,

1. Pre-trip

On the web or on the app, you can book experiences, log your tastes and interests, and line up your days. That data powers the recommendations you’ll see. The Ocean Medallion arrives by mail and becomes the key to ship access.

2. Stateroom

When you draw near, your cabin-room door unlocks without swiping. The room’s unique 43-inch TV, which doubles as a touch screen, offers a range of Carnival’s bespoke travel shows. Whatever you watch is fed into your excursion suggestions.

3. Food

When you order something, sensors detect where you are, allowing your server to find you. Your allergies and preferences are also tracked, and shape the choices you’re offered. In all, the back-end data has 45,000 allergens tagged and manages 250,000 drink combinations.

4. Activities

The right algorithms can go beyond suggesting wines based on previous orders. Carnival is creating a massive semantic database, so if you like pricey reds, you’re more apt to be guided to a violin concerto than a limbo competition. Your onboard choices—the casino, the gym, the pool—inform your excursion recommendations.
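Under the hood, the ‘right algorithms’ Kuang mentions reduce to a matching problem: score each activity against a passenger’s recorded tastes and surface the best matches. Here’s a deliberately toy sketch of that idea, content-based recommendation via cosine similarity; the activities, tags and weights are all invented, and Carnival’s actual system is certainly far more elaborate.

```python
# Toy content-based recommender: rank activities by the cosine similarity
# between their tags and a passenger's interest profile. All names, tags
# and weights below are invented for illustration.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two sparse tag->weight dicts."""
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    na = sqrt(sum(w * w for w in a.values()))
    nb = sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

activities = {
    "violin concerto":   {"classical": 1.0, "quiet": 0.8, "wine": 0.3},
    "limbo competition": {"party": 1.0, "pop": 0.5},
    "wine tasting":      {"wine": 1.0, "quiet": 0.4},
}

# A passenger whose onboard choices suggest pricey reds and calm evenings.
passenger = {"wine": 0.9, "quiet": 0.7, "classical": 0.5}

for name in sorted(activities, key=lambda n: cosine(passenger, activities[n]),
                   reverse=True):
    print(f"{cosine(passenger, activities[name]):.2f}  {name}")
```

On this toy data the wine tasting and the violin concerto outrank the limbo competition, which is exactly the ‘pricey reds to violin concerto’ behaviour the article describes.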

In Kuang’s Oct. 19, 2017 article he notes that the cruise ship line is putting a lot of effort into retraining its staff and emphasizing the ‘soft’ skills that aren’t going to be found in this iteration of the technology. No mention is made of whether there will be reductions in the number of staff members on this cruise ship, nor of the possibility that ‘soft’ skills may in the future be incorporated into this technological marvel.

Personalization/customization is increasingly everywhere

How do you feel about customized news feeds? As it turns out, this is not a rhetorical question as Adrienne LaFrance notes in her Oct. 19, 2017 article for The Atlantic (Note: Links have been removed),

Today, a Google search for news runs through the same algorithmic filtration system as any other Google search: A person’s individual search history, geographic location, and other demographic information affects what Google shows you. Exactly how your search results differ from any other person’s is a mystery, however. Not even the computer scientists who developed the algorithm could precisely reverse engineer it, given the fact that the same result can be achieved through numerous paths, and that ranking factors—deciding which results show up first—are constantly changing, as are the algorithms themselves.

We now get our news in real time, on demand, tailored to our interests, across multiple platforms, without knowing just how much is actually personalized. It was technology companies like Google and Facebook, not traditional newsrooms, that made it so. But news organizations are increasingly betting that offering personalized content can help them draw audiences to their sites—and keep them coming back.

Personalization extends beyond how and where news organizations meet their readers. Already, smartphone users can subscribe to push notifications for the specific coverage areas that interest them. On Facebook, users can decide—to some extent—which organizations’ stories they would like to appear in their news feeds. At the same time, devices and platforms that use machine learning to get to know their users will increasingly play a role in shaping ultra-personalized news products. Meanwhile, voice-activated artificially intelligent devices, such as Google Home and Amazon Echo, are poised to redefine the relationship between news consumers and the news [emphasis mine].

While news personalization can help people manage information overload by making individuals’ news diets unique, it also threatens to incite filter bubbles and, in turn, bias [emphasis mine]. This “creates a bit of an echo chamber,” says Judith Donath, author of The Social Machine: Designs for Living Online and a researcher affiliated with Harvard University’s Berkman Klein Center for Internet and Society. “You get news that is designed to be palatable to you. It feeds into people’s appetite of expecting the news to be entertaining … [and] the desire to have news that’s reinforcing your beliefs, as opposed to teaching you about what’s happening in the world and helping you predict the future better.”

Still, algorithms have a place in responsible journalism. “An algorithm actually is the modern editorial tool,” says Tamar Charney, the managing editor of NPR One, the organization’s customizable mobile-listening app. A handcrafted hub for audio content from both local and national programs as well as podcasts from sources other than NPR, NPR One employs an algorithm to help populate users’ streams with content that is likely to interest them. But Charney assures there’s still a human hand involved: “The whole editorial vision of NPR One was to take the best of what humans do and take the best of what algorithms do and marry them together.” [emphasis mine]

The skimming and diving Charney describes sounds almost exactly like how Apple and Google approach their distributed-content platforms. With Apple News, users can decide which outlets and topics they are most interested in seeing, with Siri offering suggestions as the algorithm gets better at understanding your preferences. Siri now has help from Safari. The personal assistant can now detect browser history and suggest news items based on what someone’s been looking at—for example, if someone is searching Safari for Reykjavík-related travel information, they will then see Iceland-related news on Apple News. But the For You view of Apple News isn’t 100 percent customizable, as it still spotlights top stories of the day, and trending stories that are popular with other users, alongside those curated just for you.

Similarly, with Google’s latest update to Google News, readers can scan fixed headlines, customize sidebars on the page to their core interests and location—and, of course, search. The latest redesign of Google News makes it look newsier than ever, and adds to many of the personalization features Google first introduced in 2010. There’s also a place where you can preprogram your own interests into the algorithm.

Google says this isn’t an attempt to supplant news organizations, nor is it inspired by them. The design is rather an embodiment of Google’s original ethos, the product manager for Google News Anand Paka says: “Just due to the deluge of information, users do want ways to control information overload. In other words, why should I read the news that I don’t care about?” [emphasis mine]

Meanwhile, in May [2017?], Google briefly tested a personalized search filter that would dip into its trove of data about users with personal Google and Gmail accounts and include results exclusively from their emails, photos, calendar items, and other personal data related to their query. [emphasis mine] The “personal” tab was supposedly “just an experiment,” a Google spokesperson said, and the option was temporarily removed, but seems to have rolled back out for many users as of August [2017?].

Now, Google, in seeking to settle a class-action lawsuit alleging that scanning emails to offer targeted ads amounts to illegal wiretapping, is promising that for the next three years it won’t use the content of its users’ emails to serve up targeted ads in Gmail. The move, which will go into effect at an unspecified date, doesn’t mean users won’t see ads, however. Google will continue to collect data from users’ search histories, YouTube, and Chrome browsing habits, and other activity.

The fear that personalization will encourage filter bubbles by narrowing the selection of stories is a valid one, especially considering that the average internet user or news consumer might not even be aware of such efforts. Elia Powers, an assistant professor of journalism and news media at Towson University in Maryland, studied the awareness of news personalization among students after he noticed those in his own classes didn’t seem to realize the extent to which Facebook and Google customized users’ results. “My sense is that they didn’t really understand … the role that people that were curating the algorithms [had], how influential that was. And they also didn’t understand that they could play a pretty active role on Facebook in telling Facebook what kinds of news they want them to show and how to prioritize [content] on Google,” he says.

The results of Powers’s study, which was published in Digital Journalism in February [2017], showed that the majority of students had no idea that algorithms were filtering the news content they saw on Facebook and Google. When asked if Facebook shows every news item, posted by organizations or people, in a users’ newsfeed, only 24 percent of those surveyed were aware that Facebook prioritizes certain posts and hides others. Similarly, only a quarter of respondents said Google search results would be different for two different people entering the same search terms at the same time. [emphasis mine; Note: Respondents in this study were students.]

This, of course, has implications beyond the classroom, says Powers: “People as news consumers need to be aware of what decisions are being made [for them], before they even open their news sites, by algorithms and the people behind them, and also be able to understand how they can counter the effects or maybe even turn off personalization or make tweaks to their feeds or their news sites so they take a more active role in actually seeing what they want to see in their feeds.”

On Google and Facebook, the algorithm that determines what you see is invisible. With voice-activated assistants, the algorithm suddenly has a persona. “We are being trained to have a relationship with the AI,” says Amy Webb, founder of the Future Today Institute and an adjunct professor at New York University Stern School of Business. “This is so much more catastrophically horrible for news organizations than the internet. At least with the internet, I have options. The voice ecosystem is not built that way. It’s being built so I just get the information I need in a pleasing way.”
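Before moving on, it may help to make the mechanics concrete. Strip away the specifics and a personalized feed is usually a re-ranking step: blend a story’s general newsworthiness with its match to the individual user. The sketch below is mine, with invented weights and scores; the real ranking systems at Google or Facebook involve vastly more signals, none of them public.

```python
# Toy sketch of feed personalization as re-ranking: blend a story's base
# newsworthiness with its match to a user's inferred interests.
# All weights and scores are invented for illustration.

def personalized_score(base_relevance, interest_match, p_weight=0.6):
    """Blend global relevance with per-user interest match (both in [0, 1])."""
    return (1 - p_weight) * base_relevance + p_weight * interest_match

stories = [
    # (headline, base newsworthiness, match to this user's history)
    ("Election results tonight", 0.9, 0.2),
    ("New phone reviewed",       0.5, 0.9),
    ("Local bakery wins award",  0.3, 0.8),
]

for headline, base, match in sorted(
        stories, key=lambda s: personalized_score(s[1], s[2]), reverse=True):
    print(f"{personalized_score(base, match):.2f}  {headline}")
```

Note what happens with a heavy personalization weight: the nominally biggest story of the day sinks below the user’s pet interests. That, in miniature, is the filter-bubble effect Donath describes.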

LaFrance’s article is thoughtful and well worth reading in its entirety. Now, onto some commentary.

Loss of personal agency

I have been concerned for some time about the increasingly dull results I get from a Google search. While I realize the company has been gathering information about me via my searches, supposedly in service of giving me better searches, I had no idea how deeply the company can mine for personal data. It makes me wonder what would happen if Google and Facebook attempted a merger.

More cogently, I rather resent the search engines and artificial intelligence agents (e.g. Facebook bots) which have usurped my role as the arbiter of what interests me, in short, my increasing loss of personal agency.

I’m also deeply suspicious of what these companies are going to do with my data. Will it be used to manipulate me in some way? Presumably, the data will be sold and used for some purpose. In the US, electoral data has already been married with consumer data, as Brent Bambury notes in an Oct. 13, 2017 article for his CBC (Canadian Broadcasting Corporation) Radio show,

How much of your personal information circulates in the free-market ether of metadata? It could be more than you imagine, and it might be enough to let others change the way you vote.

A data firm that specializes in creating psychological profiles of voters claims to have up to 5,000 data points on 220 million Americans. Cambridge Analytica has deep ties to the American right and was hired by the campaigns of Ben Carson, Ted Cruz and Donald Trump.

During the U.S. election, CNN called them “Donald Trump’s mind readers” and his secret weapon.

David Carroll is a Professor at the Parsons School of Design in New York City. He is one of the millions of Americans profiled by Cambridge Analytica and he’s taking legal action to find out where the company gets its masses of data and how they use it to create their vaunted psychographic profiles of voters.

On Day 6 [Bambury’s CBC radio programme], he explained why that’s important.

“They claim to have figured out how to project our voting behavior based on our consumer behavior. So it’s important for citizens to be able to understand this because it would affect our ability to understand how we’re being targeted by campaigns and how the messages that we’re seeing on Facebook and television are being directed at us to manipulate us.” [emphasis mine]

The parent company of Cambridge Analytica, SCL Group, is a U.K.-based data operation with global ties to military and political activities. David Carroll says the potential for sharing personal data internationally is a cause for concern.

“It’s the first time that this kind of data is being collected and transferred across geographic boundaries,” he says.

But that also gives Carroll an opening for legal action. An individual has more rights to access their personal information in the U.K., so that’s where he’s launching his lawsuit.

Reports link Michael Flynn, briefly Trump’s National Security Adviser, to SCL Group and indicate that former White House strategist Steve Bannon is a board member of Cambridge Analytica. Billionaire Robert Mercer, who has underwritten Bannon’s Breitbart operations and is a major Trump donor, also has a significant stake in Cambridge Analytica.

In the world of data, Mercer’s credentials are impeccable.

“He is an important contributor to the field of artificial intelligence,” says David Carroll.

“His work at IBM is seminal and really important in terms of the foundational ideas that go into big data analytics, so the relationship between AI and big data analytics. …

Bambury’s piece offers a lot more, including embedded videos, than I’ve included in this excerpt, but I also wanted to include some material from Carole Cadwalladr’s Oct. 1, 2017 Guardian article about Carroll and his legal fight in the UK,

“There are so many disturbing aspects to this. One of the things that really troubles me is how the company can buy anonymous data completely legally from all these different sources, but as soon as it attaches it to voter files, you are re-identified. It means that every privacy policy we have ignored in our use of technology is a broken promise. It would be one thing if this information stayed in the US, if it was an American company and it only did voter data stuff.”

But, he [Carroll] argues, “it’s not just a US company and it’s not just a civilian company”. Instead, he says, it has ties with the military through SCL – “and it doesn’t just do voter targeting”. Carroll has provided information to the Senate intelligence committee and believes that the disclosures mandated by a British court could provide evidence helpful to investigators.

Frank Pasquale, a law professor at the University of Maryland, author of The Black Box Society and a leading expert on big data and the law, called the case a “watershed moment”.

“It really is a David and Goliath fight and I think it will be the model for other citizens’ actions against other big corporations. I think we will look back and see it as a really significant case in terms of the future of algorithmic accountability and data protection. …

Nobody is discussing personal agency directly, but if you’re only being exposed to certain kinds of messages, then your personal agency has been taken from you. Admittedly, we don’t have complete personal agency in our lives, but AI, along with the data gathering done online and, increasingly, through wearable and smart technology, adds another layer of control to your life, and that layer is largely invisible. After all, the students in Elia Powers’ study didn’t realize their news feeds were being pre-curated.

Of musical parodies, Despacito, and evolution

What great timing: I just found out about a musical science parody featuring evolution and biology, and learned of the latest news about the study of evolution on one of the islands in the Galapagos (where Charles Darwin made some of his observations). Thanks to Stacey Johnson for featuring Evo-Devo (Despacito Biology Parody), an A Capella Science music video from Tim Blais, in her November 24, 2017 posting on the Signals blog,

Now, for the latest regarding the Galapagos and evolution (from a November 24, 2017 news item on ScienceDaily),

The arrival 36 years ago of a strange bird to a remote island in the Galapagos archipelago has provided direct genetic evidence of a novel way in which new species arise.

In this week’s issue of the journal Science, researchers from Princeton University and Uppsala University in Sweden report that the newcomer belonging to one species mated with a member of another species resident on the island, giving rise to a new species that today consists of roughly 30 individuals.

The study comes from work conducted on Darwin’s finches, which live on the Galapagos Islands in the Pacific Ocean. The remote location has enabled researchers to study the evolution of biodiversity due to natural selection.

The direct observation of the origin of this new species occurred during field work carried out over the last four decades by B. Rosemary and Peter Grant, two scientists from Princeton, on the small island of Daphne Major.

A November 23, 2017 Princeton University news release on EurekAlert, which originated the news item, provides more detail,

“The novelty of this study is that we can follow the emergence of new species in the wild,” said B. Rosemary Grant, a senior research biologist, emeritus, and a senior biologist in the Department of Ecology and Evolutionary Biology. “Through our work on Daphne Major, we were able to observe the pairing up of two birds from different species and then follow what happened to see how speciation occurred.”

In 1981, a graduate student working with the Grants on Daphne Major noticed the newcomer, a male that sang an unusual song and was much larger in body and beak size than the three resident species of birds on the island.

“We didn’t see him fly in from over the sea, but we noticed him shortly after he arrived. He was so different from the other birds that we knew he did not hatch from an egg on Daphne Major,” said Peter Grant, the Class of 1877 Professor of Zoology, Emeritus, and a professor of ecology and evolutionary biology, emeritus.

The researchers took a blood sample and released the bird, which later bred with a resident medium ground finch of the species Geospiza fortis, initiating a new lineage. The Grants and their research team followed the new “Big Bird lineage” for six generations, taking blood samples for use in genetic analysis.

In the current study, researchers from Uppsala University analyzed DNA collected from the parent birds and their offspring over the years. The investigators discovered that the original male parent was a large cactus finch of the species Geospiza conirostris from Española Island, which is more than 100 kilometers (about 62 miles) to the southeast in the archipelago.

The remarkable distance meant that the male finch was not able to return home to mate with a member of his own species and so chose a mate from among the three species already on Daphne Major. This reproductive isolation is considered a critical step in the development of a new species when two separate species interbreed.

The offspring were also reproductively isolated because their song, which is used to attract mates, was unusual and failed to attract females from the resident species. The offspring also differed from the resident species in beak size and shape, which is a major cue for mate choice. As a result, the offspring mated with members of their own lineage, strengthening the development of the new species.

Researchers previously assumed that the formation of a new species takes a very long time, but in the Big Bird lineage it happened in just two generations, according to observations made by the Grants in the field in combination with the genetic studies.

All 18 species of Darwin’s finches derived from a single ancestral species that colonized the Galápagos about one to two million years ago. The finches have since diversified into different species, and changes in beak shape and size have allowed different species to utilize different food sources on the Galápagos. A critical requirement for speciation to occur through hybridization of two distinct species is that the new lineage must be ecologically competitive — that is, good at competing for food and other resources with the other species — and this has been the case for the Big Bird lineage.

“It is very striking that when we compare the size and shape of the Big Bird beaks with the beak morphologies of the other three species inhabiting Daphne Major, the Big Birds occupy their own niche in the beak morphology space,” said Sangeet Lamichhaney, a postdoctoral fellow at Harvard University and the first author on the study. “Thus, the combination of gene variants contributed from the two interbreeding species in combination with natural selection led to the evolution of a beak morphology that was competitive and unique.”

The definition of a species has traditionally included the inability to produce fully fertile progeny from interbreeding species, as is the case for the horse and the donkey, for example. However, in recent years it has become clear that some closely related species, which normally avoid breeding with each other, do indeed produce offspring that can pass genes to subsequent generations. The authors of the study have previously reported that there has been a considerable amount of gene flow among species of Darwin’s finches over the last several thousands of years.

One of the most striking aspects of this study is that hybridization between two distinct species led to the development of a new lineage that, after only two generations, behaved like any other species of Darwin’s finches, explained Leif Andersson, a professor at Uppsala University who is also affiliated with the Swedish University of Agricultural Sciences and Texas A&M University. “A naturalist who came to Daphne Major without knowing that this lineage arose very recently would have recognized this lineage as one of the four species on the island. This clearly demonstrates the value of long-running field studies,” he said.

It is likely that new lineages like the Big Birds have originated many times during the evolution of Darwin’s finches, according to the authors. The majority of these lineages have gone extinct but some may have led to the evolution of contemporary species. “We have no indication about the long-term survival of the Big Bird lineage, but it has the potential to become a success, and it provides a beautiful example of one way in which speciation occurs,” said Andersson. “Charles Darwin would have been excited to read this paper.”

Here’s a link to and a citation for the paper,

Rapid hybrid speciation in Darwin’s finches by Sangeet Lamichhaney, Fan Han, Matthew T. Webster, Leif Andersson, B. Rosemary Grant, and Peter R. Grant. Science 23 Nov 2017: eaao4593. DOI: 10.1126/science.aao4593

This paper is behind a paywall.

Happy weekend! And for those who love their Despacito, there’s this parody featuring three Italians in a small car (thanks again to Stacey Johnson’s blog posting),