Category Archives: Mathematics

“Innovation and its enemies” and “Science in Wonderland”: a commentary on two books and a few thoughts about fish (1 of 2)

There’s more than one way to introduce emerging sciences and technologies to ‘the public’. Calestous Juma, in his 2016 book, “Innovation and Its Enemies: Why People Resist New Technologies,” takes a direct approach, as the title suggests, while Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain,” presents a more fantastical one. The fish in the headline tie both books together, thematically and tenuously, with a real-life situation.

Innovation and Its Enemies

Calestous Juma, the author of “Innovation and Its Enemies” has impressive credentials,

  • Professor of the Practice of International Development,
  • Director of the Science, Technology, and Globalization Project at Harvard Kennedy School’s Belfer Center for Science and International Affairs,
  • Founding Director of the African Centre for Technology Studies in Nairobi (Kenya),
  • Fellow of the Royal Society of London, and
  • Foreign Associate of the US National Academy of Sciences.

Even better, Juma is an excellent storyteller, perhaps too much so for a book which presents a series of science and technology adoption case histories. (Given the range of historical time periods, geography, and the innovations themselves, he always has to stop short.) The breadth is breathtaking and Juma manages it with aplomb. For example, the innovations covered include: coffee, electricity, mechanical refrigeration, margarine, recorded sound, farm mechanization, and the printing press. He also covers two recently emerging technologies/innovations: transgenic crops and AquAdvantage salmon (more about the salmon later).

Juma provides an analysis of the various ways in which the public and institutions panic over innovation and goes on to offer solutions. He also injects a subtle note of humour from time to time. Here’s how Juma describes various countries’ response to risks and benefits,

In the United States products are safe until proven risky.

In France products are risky until proven safe.

In the United Kingdom products are risky even when proven safe.

In India products are safe when proven risky.

In Canada products are neither safe nor risky.

In Japan products are either safe or risky.

In Brazil products are both safe and risky.

In sub-Saharan Africa products are risky even if they do not exist. (pp. 4-5)

To Calestous Juma, thank you for mentioning Canada and for so aptly describing the quintessentially Canadian approach to not just products and innovation but to life itself: ‘we just don’t know; it could be this or it could be that or it could be something else entirely; we just don’t know and probably never will.’

One of the aspects I most appreciated in this book was the broadening of the geographical perspective on innovation and emerging technologies to include the Middle East, China, and other regions/countries. As I’ve noted in past postings, much of the discussion here in Canada is Eurocentric and/or US-centric. For example, the Council of Canadian Academies, which conducts assessments of various science questions at the request of Canadian federal and regional governments, routinely fills the ‘international’ slot(s) on its expert panels with academics from Europe (mostly Great Britain) and/or the US (or sometimes from Australia and/or New Zealand).

A good example of Juma’s expanded perspective on emerging technology is offered in Art Carden’s July 7, 2017 book review (Note: A link has been removed),

In the chapter on coffee, Juma discusses how Middle Eastern and European societies resisted the beverage and, in particular, worked to shut down coffeehouses. Islamic jurists debated whether the kick from coffee is the same as intoxication and therefore something to be prohibited. Appealing to “the principle of original permissibility — al-ibaha, al-asliya — under which products were considered acceptable until expressly outlawed,” the fifteenth-century jurist Muhamad al-Dhabani issued several fatwas in support of keeping coffee legal.

This wasn’t the last word on coffee, which was banned and permitted and banned and permitted and banned and permitted in various places over time. Some rulers were skeptical of coffee because it was brewed and consumed in public coffeehouses — places where people could indulge in vices like gambling and tobacco use or perhaps exchange unorthodox ideas that were a threat to their power. It seems absurd in retrospect, but political control of all things coffee is no laughing matter.

The bans extended to Europe, where coffee threatened beverages like tea, wine, and beer. Predictably, and all in the name of public safety (of course!), European governments with the counsel of experts like brewers, vintners, and the British East India Tea Company regulated coffee importation and consumption. The list of affected interest groups is long, as is the list of meddlesome governments. Charles II of England would issue A Proclamation for the Suppression of Coffee Houses in 1675. Sweden prohibited coffee imports on five separate occasions between 1756 and 1817. In the late seventeenth century, France required that all coffee be imported through Marseilles so that it could be more easily monopolized and taxed.

Carden, who teaches economics at Samford University (Alabama, US), focuses on issues of individual liberty and the rule of law with regard to innovation. I can appreciate the need to focus tightly when you have a limited word count, but Carden could have spared a few words to do more justice to Juma’s comprehensive work.

At the risk of being accused of the fault I’ve attributed to Carden, I must mention the printing press chapter. While it was good to see a history of the printing press and its attendant social upheavals that notes the technology’s development and impact in regions other than Europe, it was shocking, to someone educated in Canada, to find Marshall McLuhan entirely ignored. Even now, I believe it’s virtually impossible to discuss the printing press as a technology, in Canada anyway, without mentioning our ‘communications god’ Marshall McLuhan and his 1962 book, The Gutenberg Galaxy.

Getting back to Juma’s book, his breadth and depth of knowledge, history, and geography is packaged in a relatively succinct 316 pp. As a writer, I admire his ability to distill the salient points and to devote chapters to two emerging technologies. It’s notoriously difficult to write about a currently emerging technology, and Juma even managed to include a reference published only months (in early 2016) before “Innovation and Its Enemies” appeared in July 2016.

Irrespective of Marshall McLuhan, I feel there are a few flaws. Because the book is intended for policy makers and industry (lobbyists, anyone?), it reaffirms a tendency (in academia, industry, and government) toward a top-down approach to eliminating resistance. From Juma’s perspective, there needs to be better science education because no one who is properly informed should have any objections to an emerging/new technology. He never considers the possibility that resistance to a new technology might be a reasonable response. As well, while there is some mention of corporate resistance to new technologies which might threaten profits and revenue, Juma has nothing to say about how corporate sovereignty and/or intellectual property claims are used, quite successfully by the way, to stifle innovation.

My concerns aside, testimony to the book’s worth is Carden’s review almost a year after publication. As well, Sir Peter Gluckman, Chief Science Advisor to the Prime Minister of New Zealand, mentions Juma’s book in his January 16, 2017 talk, Science Advice in a Troubled World, for the Canadian Science Policy Centre.

Science in Wonderland

Melanie Keene’s 2015 book, “Science in Wonderland: The Scientific Fairy Tales of Victorian Britain” provides an overview of the fashion for writing and reading scientific and mathematical fairy tales and, inadvertently, provides an overview of a public education programme,

A fairy queen (Victoria) sat on the throne of Victoria’s Britain, and she presided over a fairy tale age. The nineteenth century witnessed an unprecedented interest in fairies and in their tales, as they were used as an enchanted mirror in which to reflect, question, and distort contemporary society.30  …  Fairies could be found disporting themselves throughout the century on stage and page, in picture and print, from local haunts to global transports. There were myriad ways in which authors, painters, illustrators, advertisers, pantomime performers, singers, and more, captured this contemporary enthusiasm and engaged with fairyland and folklore; books, exhibitions, and images for children were one of the most significant. (p. 13)

… Anthropologists even made fairies the subject of scientific analysis, as ‘fairyology’ determined whether fairies should be part of natural history or part of supernatural lore; just one aspect of the revival of interest in folklore. Was there a tribe of fairy creatures somewhere out there waiting to be discovered, across the globe or in the fossil record? Were fairies some kind of folk memory of an extinct race? (p. 14)

Scientific engagement with fairyland was widespread, and not just as an attractive means of packaging new facts for Victorian children.42 … The fairy tales of science had an important role to play in conceiving of new scientific disciplines; in celebrating new discoveries; in criticizing lofty ambitions; in inculcating habits of mind and body; in inspiring wonder; in positing future directions; and in the consideration of what the sciences were, and should be. A close reading of these tales provides a more sophisticated understanding of the content and status of the Victorian sciences; they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants. (p. 18)

Segue: Should you be inclined to believe that society has moved on from fairies, it is possible to become a certified fairyologist (check out the website).

“Science in Wonderland,” the title a reference to Lewis Carroll’s Alice, was marketed quite differently from “Innovation and Its Enemies.” There is no description of the author, as is the protocol in academic tomes, so here’s more from her webpage on the University of Cambridge (Homerton College) website,

Fellow, Graduate Tutor, Director of Studies for History and Philosophy of Science

Getting back to Keene’s book, she makes the point that the fairy tales were based on science and integrated scientific terminology in imaginative ways, although some books do so with more success than others. Topics ranged from paleontology, botany, and astronomy to microscopy and more.

This book, with its overview of the fairy narratives, provides a contrast to Juma’s direct focus on policy makers. Keene is primarily interested in children but her book casts a wider net: “… they give insights into what these new scientific disciplines were trying to do; how they were trying to cement a certain place in the world; and how they hoped to recruit and train new participants.”

In a sense, both authors are describing how technologies are introduced and integrated into society. Keene provides a view that must seem almost halcyon to many contemporary innovation enthusiasts. As her topic area is children’s literature, any resistance she notes is primarily literary, invoking a debate about whether or not science was killing imagination and whimsy.

It would probably help to have taken a course in 19th-century children’s literature before reading Keene’s book. Even if you haven’t, it’s still quite accessible, although I was left wondering about ‘Alice in Wonderland’ and its relationship to mathematics (see Melanie Bayley’s December 16, 2009 story for the New Scientist for a detailed rundown).

As an added bonus, fairy tale illustrations are included throughout the book along with a section of higher quality reproductions.

One of the unexpected delights of Keene’s book was the section on L. Frank Baum and his electricity fairy tale, “The Master Key.” She stretches to include “The Wizard of Oz,” which doesn’t really fit, but I can’t see how she could avoid mentioning Baum’s most famous creation. There’s also a surprising (to me) focus on water, which, when paired with the interest in microscopy, makes sense. Keene isn’t the only one who has to stretch to make things fit into her narrative, and so from water I move on to fish, bringing me back to one of Juma’s emerging technologies.

Part 2: Fish and final comments

Multi-level thinking in science—the art of seeing systems

I’ve quickly read Michael Edgeworth McIntyre’s paper on multi-level thinking and find that it provides fascinating insights along with some good writing (I’ve excerpted a few passages from the paper further down in this posting).

Here’s more about the paper from an Aug. 17, 2017 Institute of Atmospheric Physics, Chinese Academy of Sciences press release on EurekAlert,

An unusual paper “On multi-level thinking and scientific understanding” appears in the October issue of Advances in Atmospheric Sciences. The author is Professor Michael Edgeworth McIntyre from University of Cambridge, whose work in atmospheric dynamics is well known. He has also had longstanding interests in astrophysics, music, perception psychology, and biological evolution.

The paper touches on a range of deep questions within and outside the atmospheric sciences. They include insights into the nature of science itself, and of scientific understanding — what it means to understand a scientific problem in depth — and into the communication skills necessary to convey that understanding and to mediate collaboration across specialist disciplines.

The paper appears in a Special Issue arising from last year’s Symposium held in Nanjing to commemorate the life of Professor Duzheng YE, who was well known as a national and international scientific leader and for his own wide range of interests, within and outside the atmospheric sciences. The symposium was organized by the Institute of Atmospheric Physics (IAP), Chinese Academy of Sciences, where Prof. YE had worked for nearly 70 years before he passed away. Upon the invitation of Prof. Jiang ZHU, the Director General of IAP and also the Editor-in-Chief of Advances in Atmospheric Sciences (AAS), Prof. McIntyre agreed to contribute a review paper to an AAS special issue commemorating the centenary of Duzheng YE’s birth. Prof. YE was also the founding Editor-in-Chief of this journal.

One of Professor McIntyre’s themes is that we all have unconscious mathematics, including Euclidean geometry and the calculus of variations. This is easy to demonstrate and is key to understanding not only how science works but also, for instance, how music works. Indeed, it reveals some of the deepest connections between music and mathematics, going beyond the usual remarks about number-patterns. All this revolves around the biological significance of what Professor McIntyre calls the “organic-change principle”.

Further themes include the scientific value of looking at a problem from more than one viewpoint, and the need to use more than one level of description. Many scientific and philosophical controversies stem from confusing one level of description with another, for instance applying arguments to one level that belong on another. This confusion can be especially troublesome when it comes to questions about human biology and human nature, and about what Professor YE called multi-level “orderly human activities”.

Related to all these points are the contrasting modes of perception and understanding offered by the brain’s left and right hemispheres. Our knowledge of their functioning has progressed far beyond the narrow clichés of popular culture, thanks to recent work in the neurosciences. The two hemispheres automatically give us different levels of description, and complementary views of a problem. Good science takes advantage of this. When the two hemispheres cooperate, with each playing to its own strengths, our problem-solving is at its most powerful.

The paper ends with three examples of unconscious assumptions that have impeded scientific progress in the past. Two of them are taken from Professor McIntyre’s main areas of research. A third is from biology.

Here’s a link to and a citation for the paper,

On multi-level thinking and scientific understanding by Michael Edgeworth McIntyre. Advances in Atmospheric Sciences, October 2017, Volume 34, Issue 10, pp. 1150–1158. DOI:

This paper is open access.

To give you a sense of his writing and imagination, I’ve excerpted a few paragraphs from p. 1153, but first you need to see this .gif (he provides a number of ways to watch it in his text but I think it’s easiest to watch the copy on his website),

Now for the excerpt,

Here is an example to show what I mean. It is a classic in experimental psychology, from the work of Professor Gunnar JOHANSSON in the 1970s. …

As soon as the twelve dots start moving, everyone with normal vision sees a person walking. This immediately illustrates several things. First, it illustrates that we all make unconscious assumptions. Here, we unconsciously assume a particular kind of three-dimensional motion. In this case the unconscious assumption is completely involuntary. We cannot help seeing a person walking, despite knowing that it is only twelve moving dots.

The animation also shows that we have unconscious mathematics, Euclidean geometry in this case. In order to generate the percept of a person walking, your brain has to fit a mathematical model to the incoming visual data, in this case a mathematical model based on Euclidean geometry. (And the model-fitting process is an active, and highly complex, predictive process most of which is inaccessible to conscious introspection.)

This brings me to the most central point in our discussion. Science does essentially the same thing. It fits models to data. So science is, in the most fundamental possible sense, an extension of ordinary perception. That is a simple way of saying what was said many decades ago by great thinkers such as Professor Sir Karl POPPER….
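McIntyre’s claim that science, like perception, works by fitting models to data can be illustrated with a toy sketch of my own (not from the paper): recover a straight line from noisy observations using least squares, the way the brain recovers a walker from twelve moving dots.

```python
import numpy as np

# Hypothetical illustration: generate noisy observations of a known line,
# then "perceive" the line by fitting a model to the incoming data.
rng = np.random.default_rng(0)
true_slope, true_intercept = 2.0, -1.0
x = np.linspace(0.0, 1.0, 50)
y = true_slope * x + true_intercept + rng.normal(0.0, 0.05, size=x.size)

# np.polyfit finds the slope and intercept minimizing squared error --
# the model that best explains the "sensory" data.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the true values 2.0 and -1.0
```

The analogy is loose, of course: the brain’s model-fitting is active, predictive, and unconscious, but the underlying operation, choosing model parameters to explain incoming data, is the same.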

I love that phrase “unconscious mathematics” for the way it includes even those of us who would never dream of thinking we had any kind of mathematics. I encourage you to read the paper in its entirety; it includes a little technical language in a few spots but the overall thesis is clear and easily understood.

Bubble physics could explain language patterns

According to University of Portsmouth physicist James Burridge, determining how linguistic dialects form is a question for physics and mathematics. Here’s more about Burridge and his latest work on the topic from a July 24, 2017 University of Portsmouth press release (also on EurekAlert),

Language patterns could be predicted by simple laws of physics, a new study has found.

Dr James Burridge from the University of Portsmouth has published a theory using ideas from physics to predict where and how dialects occur.

He said: “If you want to know where you’ll find dialects and why, a lot can be predicted from the physics of bubbles and our tendency to copy others around us.

“Copying causes large dialect regions where one way of speaking dominates. Where dialect regions meet, you get surface tension. Surface tension causes oil and water to separate out into layers, and also causes small bubbles in a bubble bath to merge into bigger ones.

“The bubbles in the bath are like groups of people – they merge into the bigger bubbles because they want to fit in with their neighbours.

“When people speak and listen to each other, they have a tendency to conform to the patterns of speech they hear others using, and therefore align their dialects. Since people typically remain geographically local in their everyday lives, they tend to align with those nearby.”

Dr Burridge, from the University’s Department of Mathematics, departs from existing approaches to studying dialects to formulate a theory of how country shape and population distribution play an important role in how dialect regions evolve.

Traditional dialectologists use the term ‘isogloss’ to describe a line on a map marking an area which has a distinct linguistic feature.

Dr Burridge said: “These isoglosses are like the edges of bubbles – the maths used to describe bubbles can also describe dialects.

“My model shows that dialects tend to move outwards from population centres, which explains why cities have their own dialects. Big cities like London and Birmingham are pushing on the walls of their own bubbles.

“This is why many dialects have a big city at their heart – the bigger the city, the greater this effect. It’s also why new ways of speaking often spread outwards from a large urban centre.

“If people live near a town or city, we assume they experience more frequent interactions with people from the city than with those living outside it, simply because there are more city dwellers to interact with.”

His model also shows that language boundaries get smoother and straighter over time, which stabilises dialects.

Dr Burridge’s research is driven by a long-held interest in spatial patterns and the idea that humans and animal behaviour can evolve predictably. His research has been funded by the Leverhulme Trust.

Here’s an image illustrating language distribution in the UK,

Caption: These maps show a simulation of three language variants that are initially distributed throughout Great Britain in a random pattern. As time passes (left to right), the boundaries between language variants tend to shorten in length. One can also see evidence of boundary lines fixing to river inlets and other coastal indentations. Credit: James Burridge, University of Portsmouth

Burridge has written an Aug. 2, 2017 essay for The Conversation which delves into the history of using physics and mathematics to understand social systems and further explains his own theory (Note: Links have been removed),

What do the physics of bubbles have in common with the way you and I speak? Not a lot, you might think. But my recently published research uses the physics of surface tension (the effect that determines the shape of bubbles) to explore language patterns – where and how dialects occur.

This connection between physical and social systems may seem surprising, but connections of this kind have a long history. The 19th century physicist Ludwig Boltzmann spent much of his life trying to explain how the physical world behaves based on some simple assumptions about the atoms from which it is made. His theories, which link atomic behaviour to the large scale properties of matter, are called “statistical mechanics”. At the time, there was considerable doubt that atoms even existed, so Boltzmann’s success is remarkable because the detailed properties of the systems he was studying were unknown.

The idea that details don’t matter when you are considering a very large number of interacting agents is tantalising for those interested in the collective behaviour of large groups of people. In fact, this idea can be traced back to another 19th century great, Leo Tolstoy, who argued in War and Peace:

“To elicit the laws of history we must leave aside kings, ministers, and generals, and select for study the homogeneous, infinitesimal elements which influence the masses.”

Mathematical history

Tolstoy was, in modern terms, advocating a statistical mechanics of history. But in what contexts will this approach work? If we are guided by what worked for Boltzmann, then the answer is quite simple. We need to look at phenomena which arise from large numbers of interactions between individuals rather than phenomena imposed from above by some mighty ruler or political movement.

To test a physical theory, one just needs a lab. But a mathematical historian must look for data that have already been collected, or can be extracted from existing sources. An ideal example is language dialects. For centuries, humans have been drawing maps of the spatial domains in which they live, creating records of their languages, and sometimes combining the two to create linguistic atlases. The geometrical picture which emerges is fascinating. As we travel around a country, the way that people use language, from their choices of words to their pronunciation of vowels, changes. Researchers quantify differences using “linguistic variables”.

For example, in 1950s England, the ulex shrub went by the name “gorse”, “furze”, “whim” or “broom” depending on where you were in the country. If we plot where these names are used on a map, we find large regions where one name is in common use, and comparatively narrow transition regions where the most common word changes. Linguists draw lines, called “isoglosses”, around the edges of regions where one word (or other linguistic variable) is common. As you approach an isogloss, you find people start to use a different word for the same thing.

A similar effect can be seen in sheets of magnetic metal where individual atoms behave like miniature magnets which want to line up with their neighbours. As a result, large regions appear in which the magnetic directions of all atoms are aligned. If we think of magnetic direction as an analogy for choice of linguistic variant – say up is “gorse” and down is “broom” – then aligning direction is like beginning to use the local word for ulex.
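The magnet analogy can be simulated directly. What follows is a minimal sketch of my own (not Burridge’s actual equations): a lattice of speakers who each repeatedly adopt the majority variant among their four neighbours. Under this copying rule the regions coarsen and the boundary between variants (the isogloss) shortens, just as surface tension shortens a bubble’s wall.

```python
import numpy as np

# Toy lattice of speakers: +1 = "gorse", -1 = "broom" (labels are mine).
rng = np.random.default_rng(1)
n = 64
grid = rng.choice([-1, 1], size=(n, n))

def boundary_length(g):
    """Count neighbouring pairs that disagree (length of the isogloss)."""
    return int((g != np.roll(g, 1, axis=0)).sum()
               + (g != np.roll(g, 1, axis=1)).sum())

before = boundary_length(grid)
for _ in range(100_000):
    i, j = rng.integers(n, size=2)
    s = (grid[(i - 1) % n, j] + grid[(i + 1) % n, j]
         + grid[i, (j - 1) % n] + grid[i, (j + 1) % n])
    if s != 0:                 # adopt the local majority variant
        grid[i, j] = np.sign(s)
after = boundary_length(grid)
print(before, after)  # the boundary only ever shortens under this rule
```

Each update either leaves the disagreement count unchanged or reduces it, which is the simulation’s version of boundaries getting “smoother and straighter over time.”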

Linguistic maths

I made just one assumption about language evolution: that people tend to pick up ways of speaking which they hear in the geographical region where they spend most of their time. Typically, this region will be a few miles or tens of miles wide and centred on their home, but its shape may be skewed by the presence of a nearby city which they visit more often than the surrounding countryside.

My equations predict that isoglosses tend to get pushed away from cities, and drawn towards parts of the coast which are indented, like bays or river mouths. The city effect can be explained by imagining you live near an isogloss at the edge of a city. Because there are a lot more people on the city side of the isogloss, you will tend to have more conversations with them than with rural people living on the other side. For this reason, you will probably start using the linguistic variable used in the city. If lots of people do this, then the isogloss will move further out into the countryside.

My one simple assumption – that people pick up local ways of speaking – led to equations which describe the physics of bubbles and allowed me to gain new insight into the formation of language patterns. Who knows what other linguistic patterns mathematics could explain?
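The city effect described in the essay can be caricatured in a few lines of code. This is a deterministic toy of my own construction, not Burridge’s model: on a line of sites, city sites carry ten times the population, and each site adopts whichever variant has more population-weighted speakers within earshot. The isogloss gets dragged outward from the city.

```python
# Sites 0..24 are the "city" (population 10, speaking variant 1);
# the rest are rural (population 1, speaking variant 0).
n, city_end, radius = 60, 25, 3
pop = [10 if i < city_end else 1 for i in range(n)]
v = [1 if i < city_end else 0 for i in range(n)]

def isogloss(variants):
    """Position of the boundary: index of the first rural-variant site."""
    return variants.index(0)

def sweep(variants):
    new = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        support1 = sum(pop[j] for j in range(lo, hi) if variants[j] == 1)
        support0 = sum(pop[j] for j in range(lo, hi) if variants[j] == 0)
        new.append(1 if support1 > support0 else 0)
    return new

before = isogloss(v)
v = sweep(v)
print(before, isogloss(v))  # the heavy city population pushes the boundary out
```

Because there are far more (weighted) speakers on the city side of the boundary, rural sites within earshot of the city flip to the city variant, exactly the conversational imbalance Burridge describes.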

Burridge’s paper can be found here,

Spatial Evolution of Human Dialects by James Burridge. Phys. Rev. X 7, 031008 (Vol. 7, Iss. 3, July–September 2017). Published 17 July 2017.

This paper is open access and it is quite readable as these things go. In other words, you may not understand all of the mathematics, physics, or linguistics, but it is written so that a relatively well-informed person should be able to understand the basics if not all the nuances.

Are plants and brains alike?

The answer to the question of whether brains and plants are alike is the standard ‘yes and no’. That said, there are some startling similarities from a statistical perspective, from a July 6, 2017 Salk Institute news release (also received via email; Note: Links have been removed),

Plants and brains are more alike than you might think: Salk scientists discovered that the mathematical rules governing how plants grow are similar to how brain cells sprout connections. The new work, published in Current Biology on July 6, 2017, and based on data from 3D laser scanning of plants, suggests there may be universal rules of logic governing branching growth across many biological systems.

“Our project was motivated by the question of whether, despite all the diversity we see in plant forms, there is some form or structure they all share,” says Saket Navlakha, assistant professor in Salk’s Center for Integrative Biology and senior author of the paper. “We discovered that there is—and, surprisingly, the variation in how branches are distributed in space can be described mathematically by something called a Gaussian function, which is also known as a bell curve.”

Being immobile, plants have to find creative strategies for adjusting their architecture to address environmental challenges, like being shaded by a neighbor. The diversity in plant forms, from towering redwoods to creeping thyme, is a visible sign of these strategies, but Navlakha wondered if there was some unseen organizing principle at work. To find out, his team used high-precision 3D scanning technology to measure the architecture of young plants over time and quantify their growth in ways that could be analyzed mathematically.

“This collaboration arose from a conversation that Saket and I had shortly after his arrival at Salk,” says Professor and Director of the Plant Molecular and Cellular Biology Laboratory Joanne Chory, who, along with being the Howard H. and Maryam R. Newman Chair in Plant Biology, is also a Howard Hughes Medical Investigator and one of the paper’s coauthors. “We were able to fund our studies thanks to Salk’s innovation grant program and the Howard Hughes Medical Institute.”

The team began with three agriculturally valuable crops: sorghum, tomato and tobacco. The researchers grew the plants from seeds under conditions the plants might experience naturally (shade, ambient light, high light, high heat and drought). Every few days for a month, first author Adam Conn scanned each plant to digitally capture its growth. In all, Conn scanned almost 600 plants.

“We basically scanned the plants like you would scan a piece of paper,” says Conn, a Salk research assistant. “But in this case the technology is 3D and allows us to capture a complete form—the full architecture of how the plant grows and distributes branches in space.”

From left: Adam Conn and Saket Navlakha. Credit: Salk Institute

Each plant’s digital representation is called a point cloud, a set of 3D coordinates in space that can be analyzed computationally. With the new data, the team built a statistical description of theoretically possible plant shapes by studying the plant’s branch density function. The branch density function depicts the likelihood of finding a branch at any point in the space surrounding a plant.

This model revealed three properties of growth: separability, self-similarity and a Gaussian branch density function. Separability means that growth in one spatial direction is independent of growth in other directions. According to Navlakha, this property means that growth is very simple and modular, which may let plants be more resilient to changes in their environment. Self-similarity means that all the plants have the same underlying shape, even though some plants may be stretched a little more in one direction, or squeezed in another direction. In other words, plants don’t use different statistical rules to grow in shade than they do to grow in bright light. Lastly, the team found that, regardless of plant species or growth conditions, branch density data followed a Gaussian distribution that is truncated at the boundary of the plant. Basically, this says that branch growth is densest near the plant’s center and gets less dense farther out following a bell curve.
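Two of the three properties, separability and the Gaussian branch density, can be illustrated with a rough sketch (my own made-up numbers, not the Salk team’s data, and ignoring the truncation at the plant boundary for simplicity): sample “branch” points from a separable 3D Gaussian and check that the axes are statistically independent and that density peaks at the centre.

```python
import numpy as np

# Hypothetical branch density: a separable Gaussian, with the plant
# "squeezed" along the z axis (self-similarity: same shape, different scales).
rng = np.random.default_rng(2)
sigmas = np.array([1.0, 1.0, 0.5])
cloud = rng.normal(0.0, sigmas, size=(50_000, 3))   # a toy "point cloud"

# Separability: growth along one axis is independent of the others,
# so the off-diagonal covariances should be near zero.
cov = np.cov(cloud.T)
off_diagonal_small = np.allclose(cov - np.diag(np.diag(cov)), 0.0, atol=0.02)
print(off_diagonal_small)

# Gaussian density: branches are densest near the centre and thin out
# following a bell curve along each axis.
central = int((np.abs(cloud[:, 0]) < 0.25).sum())
outer = int((np.abs(cloud[:, 0] - 2.0) < 0.25).sum())
print(central > outer)
```

In the actual study the comparison runs the other way, from scanned point clouds to the fitted density, but the sketch shows what the three properties mean operationally.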

The high level of evolutionary efficiency suggested by these properties is surprising. Even though it would be inefficient for plants to evolve different growth rules for every type of environmental condition, the researchers did not expect to find that plants would be so efficient as to develop only a single functional form. The properties they identified in this work may help researchers evaluate new strategies for genetically engineering crops.

Previous work by one of the paper’s authors, Charles Stevens, a professor in Salk’s Molecular Neurobiology Laboratory, found the same three mathematical properties at work in brain neurons. “The similarity between neuronal arbors and plant shoots is quite striking, and it seems like there must be an underlying reason,” says Stevens. “Probably, they both need to cover a territory as completely as possible but in a very sparse way so they don’t interfere with each other.”

The next challenge for the team is to identify what might be some of the mechanisms at the molecular level driving these changes. Navlakha adds, “We could see whether these principles deviate in other agricultural species and maybe use that knowledge in selecting plants to improve crop yields.”

Should you not be able to access the news release, you can find the information in a July 6, 2017 news item on ScienceDaily.

For the paper, here’s a link and a citation,

A Statistical Description of Plant Shoot Architecture by Adam Conn, Ullas V. Pedmale, Joanne Chory, Charles F. Stevens, Saket Navlakha. Current Biology, In Press Corrected Proof, July 2017

This paper is behind a paywall.

Here’s an image that illustrates the principles the researchers are attempting to establish,

This illustration represents how plants use the same rules to grow under widely different conditions (for example, cloudy versus sunny), and that the density of branches in space follows a Gaussian (“bell curve”) distribution, which is also true of neuronal branches in the brain. Credit: Salk Institute

Ancient Roman teaching methods for maths education?

I find this delightful (from a July 7, 2017 news item),

Schoolchildren from across the region have been learning different ways to engage with maths, as part of a series of ancient Roman classroom days held at the University of Reading [UK].

Organised by the University’s Department of Classics, the Reading Ancient Schoolroom event saw pupils undertake a series of ancient-style school exercises, including doing multiplication, division, and calculating compound interest with Roman numerals. A key difference between how maths was taught then and now is that sums were not written down in ancient Roman times – instead an abacus or a counting board with dried beans was used.
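To get a feel for why sums weren't written down symbolically, here is a minimal Python sketch of Roman numeral conversion; on the counting board the arithmetic itself was done with beads, and the numerals mainly recorded the inputs and the result:

```python
def to_roman(n):
    """Convert a positive integer to a Roman numeral string."""
    values = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"),
              (100, "C"), (90, "XC"), (50, "L"), (40, "XL"),
              (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
    out = []
    for v, symbol in values:
        while n >= v:
            out.append(symbol)
            n -= v
    return "".join(out)

def from_roman(s):
    """Convert a Roman numeral string back to an integer."""
    values = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}
    total = 0
    # a symbol is subtracted when a larger symbol follows it (e.g. IV, XC)
    for a, b in zip(s, s[1:] + "I"):
        total += values[a] if values[a] >= values[b] else -values[a]
    return total
```

Even this small exercise shows how awkward column-free numerals are for pencil-and-paper arithmetic, which is exactly the gap the abacus filled.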

In addition, school children in antiquity were taught individually by the teacher and worked on their own assignments, rather than being taught as a whole class. This meant pupils were able to work at their own rate of ability.

A July 6, 2017 University of Reading press release, which originated the news item, expands on the theme,

Professor Eleanor Dickey, who organised the series of events, said: “We’ve been running these ancient schoolroom days for a few years now and what we’ve learnt during that time is that the children really engage with the ancient teaching methods, especially when it comes to maths. We’ve found that children who aren’t naturally gifted at maths actually enjoy using the abacus and counting boards and this helps to stimulate their interest and learning of the subject.

“As follow up to the day we provide teachers with a pack of teaching materials they can take back to their own classroom and this includes instructions on how to make a counting board, as well as other maths-related and non-maths-related activities. It is my hope that some of these ancient methods can help to further modern teaching practice.”

Other activities on the day included reading poetry written without word division or punctuation, learning to write with a stylus on a wax tablet and reading from papyrus scrolls. Wearing Roman costumes, students also got to sample some authentic Roman food and handle objects from the University of Reading’s Ure Museum of Greek Archaeology.

Professor Dickey continued: “Ancient education methods, by being very different from our own, help us better appreciate both the advantages and the disadvantages of our own system, and show that doing things our way is neither natural nor inevitable.

“The ancient Roman school days are also a great way to get children interested in history more generally.”

The research which helped determine what a day in an ancient Roman classroom was like came from Professor Dickey’s discovery and translation of a set of ancient textbooks describing what children did in school. Parts of these historical records were published last year in a book by Professor Dickey: Learning Latin the Ancient Way: Latin Textbooks in the Ancient World, published by Cambridge University Press.

In hindsight it seems obvious. Of course an abacus helps with learning; it's more engaging. You get to make a range of gestures and you make sounds (the clicking of the abacus beads), neither of which is typically part of the maths experience. Then there's the individualized attention and your own special maths problems.

Democracy through mathematics

Prime Minister Justin Trudeau promised electoral reform before he and his party won the 2015 Canadian federal election. In February 2017, Trudeau’s government abandoned any and all attempts at electoral reform (see Feb. 1, 2017 article by Laura Stone about the ‘broken’ promise for the Globe and Mail). Months later, the issue lingers on.

Anyone who marks a ballot for a candidate in a democratic election assumes they have the same influence as every other voter. Consequently, as far as population is concerned, the constituencies should be as equal as possible. (Photo: Fotolia / Stockfotos-MG)

While this research doesn’t address how to change the system so that votes might be more meaningful, especially in districts where the outcome of any election is all but guaranteed, it does suggest there are better ways of redrawing the electoral map (redistricting). From a June 12, 2017 Technical University of Munich (TUM) press release (also on EurekAlert but dated June 23, 2017),

For democratic elections to be fair, voting districts must have similar sizes. When populations shift, districts need to be redistributed – a complex and, in many countries, controversial task when political parties attempt to influence redistricting. Mathematicians at the Technical University of Munich (TUM) have now developed a method that allows the efficient calculation of optimally sized voting districts.

When constituents cast their vote for a candidate, they assume it carries the same weight as that of the others. Voting districts should thus be sized equally according to population. When populations change, boundaries need to be redrawn.

For example, 34 political districts were redrawn for the upcoming parliamentary election in Germany – a complex task. In other countries, this process often results in major controversy. Political parties often engage in gerrymandering to create districts with a disproportionately large number of their own constituents. In the United States, for example, state governments frequently exert questionable influence when redrawing the boundaries of congressional districts.

“An effective and neutral method for political district zoning, which sounds like an administrative problem, is actually of great significance from the perspective of democratic theory,” emphasizes Stefan Wurster, Professor of Policy Analysis at the Bavarian School of Public Policy at TUM. “The acceptance of democratic elections is in danger whenever parties or individuals gain an advantage out of the gate. The problem becomes particularly relevant when the allocation of parliamentary seats is determined by the number of direct mandates won. This is the case in majority election systems like those in the USA, Great Britain, and France.”
Test case: German parliamentary election

Prof. Peter Gritzmann, head of the Chair of Applied Geometry and Discrete Mathematics at TUM, in collaboration with his staff member Fabian Klemm and his colleague Andreas Brieden, professor of statistics at the University of the German Federal Armed Forces, has developed a methodology that allows the optimal distribution of electoral district boundaries to be calculated in an efficient and, of course, politically neutral manner.

The mathematicians tested their methodology using electoral districts of the German parliament. According to the German Federal Electoral Act, the number of constituents in a district should not deviate more than 15 percent from the average. In cases where the deviation exceeds 25 percent, electoral district borders must be redrawn. In this case, the relevant election commission must adhere to various provisions: For example, districts must be contiguous and not cross state, county or municipal boundaries. The electoral districts are subdivided into precincts with one polling station each.
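The legal thresholds quoted above are easy to check mechanically. Here is a minimal sketch (the population figures in the usage example are invented, not German census data):

```python
def check_districts(populations, warn=0.15, redraw=0.25):
    """Apply the German Federal Electoral Act thresholds described above:
    flag districts deviating more than 15% from the average population,
    and mark those beyond 25%, whose borders must be redrawn."""
    avg = sum(populations.values()) / len(populations)
    report = {}
    for name, pop in populations.items():
        deviation = abs(pop - avg) / avg
        if deviation > redraw:
            report[name] = "must be redrawn"
        elif deviation > warn:
            report[name] = "above 15% target"
    return report
```

For example, with districts of 100, 110, 130, and 60 thousand constituents (average 100), the last two deviate by 30% and 40% and would have to be redrawn, while 110 sits comfortably within the 15% target.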
Better than required by law

“There are more ways to consolidate communities into electoral districts than there are atoms in the known universe,” says Peter Gritzmann. “But, using our model, we can still find efficient solutions in which all districts have roughly equal numbers of constituents – and that in a ‘minimally invasive’ manner that requires no voter to switch precincts.”

Deviations of 0.3 to 8.7 percent from the average size of electoral districts cannot be avoided based solely on the different number of voters in individual states. But the new methodology achieves this optimum. “Our process comes close to the theoretical limit in every state, and we end up far below the 15 percent deviation allowed by law,” says Gritzmann.
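The press release does not spell out the optimization itself, so the following is only a greedy stand-in for the size-balancing objective, not the TUM geometric-clustering method (which also enforces contiguity and administrative boundaries): assign communities, largest first, to whichever district currently has the fewest constituents.

```python
import heapq

def balance_districts(populations, k):
    """Greedy sketch of the size-balancing objective: place each
    community (largest first) into the currently lightest district.
    Ignores geography, so it is only an illustration of the goal."""
    heap = [(0, d) for d in range(k)]  # (district population, district id)
    heapq.heapify(heap)
    assignment = {}
    for name, pop in sorted(populations.items(), key=lambda kv: -kv[1]):
        total, d = heapq.heappop(heap)
        assignment[name] = d
        heapq.heappush(heap, (total + pop, d))
    return assignment
```

Even this naive heuristic lands close to equal district sizes on small inputs; the real method must do so while keeping districts contiguous and within state, county, and municipal lines.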
Deployment possible in many countries

The researchers used a mathematical model developed in the working group to calculate the electoral districts: “geometric clustering” groups the communities into clusters, the optimized electoral districts. The target definition for the calculations can be arbitrarily modified, making the methodology applicable to many countries with different election laws.

The methodology is also applicable to other types of problems: for example, in voluntary lease and utilization exchanges in agriculture, to determine adequate tariff groups for insurers or to model hybrid materials. “However, drawing electoral district boundaries is a very special application, because here mathematics can help strengthen democracies,” sums up Gritzmann.

Although the electoral wards for the German election were redrawn in 2012, by 2013, the year of the election, population changes had already pushed deviations above the desired maximum in some of them (left). The mathematical method results in significantly lower deviations, providing a better buffer against future population change. (Image: F. Klemm / TUM)


Here’s a link to and a citation for the paper,

Constrained clustering via diagrams: A unified theory and its application to electoral district design by Andreas Brieden, Peter Gritzmann, Fabian Klemm. European Journal of Operational Research, Volume 263, Issue 1, 16 November 2017, Pages 18–34

This paper is behind a paywall.

While the redesign of electoral districts has been a contentious issue federally and provincially in Canada (and, I imagine, in municipalities where there is representation by district), the focus for electoral reform has been on eliminating the ‘first-past-the-post’ system and replacing it with something new. Apparently, there is also some interest in the US. A June 27, 2017 article by David Daley describes one such initiative,

Some people blame gerrymandering, while others cite geography or rage against dark money. All are corrupting factors. All act as accelerants on the underlying issue: Our winner-take-all [first-past-the-post] system of districting that gives all the seats to the side with 50 percent plus one vote and no representation to the other 49.9 percent. We could end gerrymandering tomorrow and it wouldn’t help the unrepresented Republicans in Connecticut, or Democrats in Kansas, feel like they had a voice in Congress.

A Virginia congressman wants to change this. Rep. Don Beyer, a Democrat, introduced something called the Fair Representation Act this week. Beyer aims to wipe out today’s map of safe red and blue seats and replace them with larger, multimember districts (drawn by nonpartisan commissions) of three, four or five representatives. Smaller states would elect all members at large. All members would then be elected with ranked-choice voting. That would ensure that as many voters as possible elect a candidate of their choice: In a multimember district with five seats, for example, a candidate could potentially win with one-sixth of the vote.

This is how you fix democracy. The larger districts would help slay the gerrymander. A ranked-choice system would eliminate our zero-sum, winner-take-all politics. Leadership of the House would belong to the side with the most votes — unlike in 2012, for example, when Democratic House candidates received 1.4 million more votes than Republicans, but the GOP maintained a 33-seat majority. No wasted votes and no spoilers, bridge builders in Congress, and (at least in theory) less negative campaigning as politicians vied to be someone’s second choice if not their first. There’s a lot to like here.

There are other similar schemes, but the idea is always to re-establish the primacy (meaningfulness) of a vote and to achieve better representation of the country’s voters and interests. As for the failed Canadian effort, such as it was, the issue’s refusal to fade away hints that Canadian politicians, at whatever jurisdictional level they inhabit, might want to tackle the situation more seriously than they have previously.

Mathematics/Music/Art/Architecture/Education/Culture: Bridges 2017 conference in Waterloo, Canada

Bridges 2017 will be held in Waterloo, Canada from July 27 – 31, 2017. Here’s the invitation which was released last year,

To give you a sense of the range offered, here’s more from Bridges 2017 events page,

Every Bridges conference includes a number of events other than paper presentations. Please click on one of the events below to learn more about it.

UWAG Exhibition

The University of Waterloo Art Gallery (UWAG) has partnered with Bridges to create an exhibition of five local artists who explore mathematical themes in their work. The exhibition runs concurrently with the conference.


Theatre Night

An evening dramatic performance that explores themes of art, mathematics and teaching, performed by Peter Taylor and Judy Wearing from Queen’s University.


Formal Music Night

An evening concert of mathematical choral music, performed by a specially-formed ensemble of choristers and professional soloists.


Family Day

An afternoon of community activities, games, workshops, interactive demonstrations, presentations, performances, and art exhibitions for children and adults, free and open to all.


Poetry Reading

A session of invited readings of poetry exploring mathematical themes, in a wide range of styles. Attendees will also be invited to share their own poetry in an open mic session. A printed anthology will be available at the conference.


Informal Music Night

A longstanding tradition at Bridges—a casual variety show in which all conference participants are invited to share their talents, musical or otherwise, with a brief performance.

I have some more details about the exhibition at the University of Waterloo Art Gallery (UWAG) from a July 19, 2017 ArtSci Salon notice received via email,

P A S S A G E  +  O B S T A C L E

JULY 27–30


PASSAGE + OBSTACLE features a selection of work by multidisciplinary
area artists Patrick Cull, Paul Dignan, Laura De Decker, Soheila
Esfahani, and Andrew James Smith. Sharing a rigorous approach to
materials and subject matter, their artworks parallel Bridges’ stated
goal to explore “mathematical connections in art, music, architecture,
education and culture”. The exhibition sets out to complement and
expand on the theme by contrasting subtle and overt links between the
use of geometry, pattern, and optical effects across mediums ranging
from painting and installation to digital media. Using the bridge as a
metaphor, the artworks can be appreciated as a means of getting from A
to B by overcoming obstructions, whether perceptual or otherwise.


University of Waterloo Art Gallery
East Campus Hall 1239
519.888.4567 ext. 33575

Ivan Jurakic, Director / Curator
519.888.4567 ext. 36741

263 Phillip Street, Waterloo
East Campus Hall (ECH) is located north of University Avenue West
across from Engineering 6

Visitor Parking is available in Lot E6 or Q for a flat rate of $5

University of Waterloo Art Gallery
200 University Avenue West
Waterloo, ON, Canada N2L 3G1

You can find out more about Bridges 2017, including how to register, here (the column on the left provides links to registration, program, and more information).


Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates on more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006 although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”
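In this language, a clique of k all-to-all connected neurons corresponds to a (k-1)-dimensional simplex, so "structures with up to eleven dimensions" means cliques of a dozen mutually connected neurons. Counting cliques by size in a small graph is straightforward; this brute-force sketch uses undirected edges, whereas the Blue Brain analysis works with directed cliques and far more efficient tools:

```python
from itertools import combinations

def count_cliques(edges, n):
    """Count fully connected groups (cliques) by size in an undirected
    graph on nodes 0..n-1.  A clique of k nodes corresponds to a
    (k-1)-dimensional simplex in the topological picture above."""
    adj = {i: set() for i in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    counts = {}
    for k in range(1, n + 1):
        c = sum(1 for group in combinations(range(n), k)
                if all(b in adj[a] for a, b in combinations(group, 2)))
        if c:
            counts[k] = c
    return counts
```

On the complete graph with four nodes, for instance, this finds four vertices, six edges, four triangles, and one tetrahedron: one simplex of each dimension up to three.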

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes that the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.


About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain.

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017

This paper is open access.

Time traveling at the University of British Columbia

Anyone who dreams of time traveling is going to have to wait a bit longer, as this form of time travel is theoretical. From an April 27, 2017 news item on ScienceDaily,

After some serious number crunching, a UBC [University of British Columbia] researcher has come up with a mathematical model for a viable time machine.

Ben Tippett, a mathematics and physics instructor at UBC’s Okanagan campus, recently published a study about the feasibility of time travel. Tippett, whose field of expertise is Einstein’s theory of general relativity, studies black holes and science fiction when he’s not teaching. Using math and physics, he has created a formula that describes a method for time travel.

An April 27, 2017 UBC at Okanagan news release (also on EurekAlert), which originated the news item, elaborates on the work.

“People think of time travel as something fictional,” says Tippett. “And we tend to think it’s not possible because we don’t actually do it. But, mathematically, it is possible.”

Ever since H.G. Wells published his book The Time Machine in 1895, people have been curious about time travel—and scientists have worked to solve or disprove the theory. In 1915 Albert Einstein announced his theory of general relativity, stating that gravitational fields are caused by distortions in the fabric of space and time. More than 100 years later, the LIGO Scientific Collaboration—an international team of physics institutes and research groups—announced the detection of gravitational waves generated by colliding black holes billions of light years away, confirming Einstein’s theory.

The division of space into three dimensions, with time in a separate dimension by itself, is incorrect, says Tippett. The four dimensions should be imagined simultaneously, where different directions are connected, as a space-time continuum. Using Einstein’s theory, Tippett explains that the curvature of space-time accounts for the curved orbits of the planets.

In “flat” or uncurved space-time, planets and stars would move in straight lines. In the vicinity of a massive star, space-time geometry becomes curved and the straight trajectories of nearby planets will follow the curvature and bend around the star.

“The time direction of the space-time surface also shows curvature. There is evidence showing that the closer to a black hole we get, the slower time moves,” says Tippett. “My model of a time machine uses curved space-time to bend time into a circle for the passengers, not a straight line. That circle takes us back in time.”
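The "time moves slower near a black hole" remark is ordinary gravitational time dilation, which for a stationary observer outside a non-rotating black hole has a simple closed form. This little sketch illustrates that standard effect, not Tippett's TARDIS geometry:

```python
import math

def time_dilation(r, r_s):
    """Rate at which a stationary observer's clock runs, relative to a
    far-away clock, at radius r outside a non-rotating black hole with
    Schwarzschild radius r_s.  Approaches 0 at the horizon, 1 far away."""
    if r <= r_s:
        raise ValueError("observer must be outside the event horizon")
    return math.sqrt(1.0 - r_s / r)
```

At twice the Schwarzschild radius a clock ticks at about 71% of the far-away rate, and the factor climbs back toward 1 as the observer retreats; bending that time direction into a closed loop is the step that requires Tippett's exotic matter.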

While it is possible to describe this type of time travel using a mathematical equation, Tippett doubts that anyone will ever build a machine to make it work.

“H.G. Wells popularized the term ‘time machine’ and he left people with the thought that an explorer would need a ‘machine or special box’ to actually accomplish time travel,” Tippett says. “While it is mathematically feasible, it is not yet possible to build a space-time machine because we need materials—which we call exotic matter—to bend space-time in these impossible ways, but they have yet to be discovered.”

For his research, Tippett created a mathematical model of a Traversable Acausal Retrograde Domain in Space-time (TARDIS). He describes it as a bubble of space-time geometry which carries its contents backward and forward through space and time as it tours a large circular path. The bubble moves through space-time at speeds greater than the speed of light at times, allowing it to move backward in time.

“Studying space-time is both fascinating and problematic. And it’s also a fun way to use math and physics,” says Tippett. “Experts in my field have been exploring the possibility of mathematical time machines since 1949. And my research presents a new method for doing it.”

Here’s a link to and a citation for the paper,

Traversable acausal retrograde domains in spacetime by Benjamin K Tippett and David Tsang. Classical and Quantum Gravity, Volume 34, Number 9. Published 31 March 2017

© 2017 IOP Publishing Ltd

This paper is behind a paywall.