Monthly Archives: July 2017

Robot artists—should they get copyright protection?

Clearly a lawyer wrote this June 26, 2017 essay on theconversation.com (Note: A link has been removed),

When a group of museums and researchers in the Netherlands unveiled a portrait entitled The Next Rembrandt, it was something of a tease to the art world. It wasn’t a long lost painting but a new artwork generated by a computer that had analysed thousands of works by the 17th-century Dutch artist Rembrandt Harmenszoon van Rijn.

The computer used something called machine learning [emphasis mine] to analyse and reproduce technical and aesthetic elements in Rembrandt’s works, including lighting, colour, brush-strokes and geometric patterns. The result is a portrait produced based on the styles and motifs found in Rembrandt’s art but produced by algorithms.

But who owns creative works generated by artificial intelligence? This isn’t just an academic question. AI is already being used to generate works in music, journalism and gaming, and these works could in theory be deemed free of copyright because they are not created by a human author.

This would mean they could be freely used and reused by anyone and that would be bad news for the companies selling them. Imagine you invest millions in a system that generates music for video games, only to find that music isn’t protected by law and can be used without payment by anyone in the world.

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

It could have been someone involved in the technology but nobody with that background would write “… something called machine learning … .”  Andres Guadamuz, lecturer in Intellectual Property Law at the University of Sussex, goes on to say (Note: Links have been removed),

That doesn’t mean that copyright should be awarded to the computer, however. Machines don’t (yet) have the rights and status of people under the law. But that doesn’t necessarily mean there shouldn’t be any copyright either. Not all copyright is owned by individuals, after all.

Companies are recognised as legal people and are often awarded copyright for works they don’t directly create. This occurs, for example, when a film studio hires a team to make a movie, or a website commissions a journalist to write an article. So it’s possible copyright could be awarded to the person (company or human) that has effectively commissioned the AI to produce work for it.


Things are likely to become yet more complex as AI tools are more commonly used by artists and as the machines get better at reproducing creativity, making it harder to discern if an artwork is made by a human or a computer. Monumental advances in computing and the sheer amount of computational power becoming available may well make the distinction moot. At that point, we will have to decide what type of protection, if any, we should give to emergent works created by intelligent algorithms with little or no human intervention.

The most sensible move seems to follow those countries that grant copyright to the person who made the AI’s operation possible, with the UK’s model looking like the most efficient. This will ensure companies keep investing in the technology, safe in the knowledge they will reap the benefits. What happens when we start seriously debating whether computers should be given the status and rights of people is a whole other story.

The team that developed a ‘new’ Rembrandt produced a video about the process,

Mark Brown’s April 5, 2016 article about this project (which was unveiled on April 5, 2016 in Amsterdam, Netherlands) for the Guardian newspaper provides more detail such as this,

It [Next Rembrandt project] is the result of an 18-month project which asks whether new technology and data can bring back to life one of the greatest, most innovative painters of all time.

Advertising executive [Bas] Korsten, whose brainchild the project was, admitted that there were many doubters. “The idea was greeted with a lot of disbelief and scepticism,” he said. “Also coming up with the idea is one thing, bringing it to life is another.”

The project has involved data scientists, developers, engineers and art historians from organisations including Microsoft, Delft University of Technology, the Mauritshuis in The Hague and the Rembrandt House Museum in Amsterdam.

The final 3D printed painting consists of more than 148 million pixels and is based on 168,263 Rembrandt painting fragments.

Some of the challenges have been in designing a software system that could understand Rembrandt based on his use of geometry, composition and painting materials. A facial recognition algorithm was then used to identify and classify the most typical geometric patterns used to paint human features.

It sounds like it was a fascinating project but I don’t believe ‘The Next Rembrandt’ is an example of AI creativity or of the ‘creative spark’ Guadamuz discusses. This seems more like the kind of work that could be done by a talented forger or fraudster. As I understand it, even when a human creates this type of artwork (a newly discovered and unknown xxx masterpiece), the piece is not considered a creative work in its own right. Some pieces are outright fraudulent and others are described as being “in the manner of xxx.”

Taking a somewhat different approach to mine, Timothy Geigner at Techdirt has also commented on the question of copyright and AI in relation to Guadamuz’s essay in a July 7, 2017 posting,

Unlike with earlier computer-generated works of art, machine learning software generates truly creative works without human input or intervention. AI is not just a tool. While humans program the algorithms, the decision making – the creative spark – comes almost entirely from the machine.

Let’s get the easy part out of the way: the culminating sentence in the quote above is not true. The creative spark is not the artistic output. Rather, the creative spark has always been known as the need to create in the first place. This isn’t a trivial quibble, either, as it factors into the simple but important reasoning for why AI and machines should certainly not receive copyright rights on their output.

That reasoning is the purpose of copyright law itself. Far too many see copyright as a reward system for those that create art rather than what it actually was meant to be: a boon to an artist to compensate for that artist to create more art for the benefit of the public as a whole. Artificial intelligence, however far progressed, desires only what it is programmed to desire. In whatever hierarchy of needs an AI might have, profit via copyright would factor either laughably low or not at all into its future actions. Future actions of the artist, conversely, are the only item on the agenda for copyright’s purpose. If receiving a copyright wouldn’t spur AI to create more art beneficial to the public, then copyright ought not to be granted.

Geigner goes on (July 7, 2017 posting) to elucidate other issues in the general debate over AI and ‘rights’, as well as the EU’s proposed solution.

In scientific race US sees China coming up from rear

Sometimes it seems as if scientific research is like a race with everyone competing for first place. As in most sports, there are multiple competitions for various sub-groups but only one important race. The US has held the lead position for decades although always with some anxiety. These days the anxiety is focused on China. A June 15, 2017 news item on ScienceDaily suggests that US dominance is threatened in at least one area of research—the biomedical sector,

American scientific teams still publish significantly more biomedical research discoveries than teams from any other country, a new study shows, and the U.S. still leads the world in research and development expenditures.

But American dominance is slowly shrinking, the analysis finds, as China’s skyrocketing investment in science over the last two decades begins to pay off. Chinese biomedical research teams now rank fourth in the world for total number of new discoveries published in six top-tier journals, and the country spent three-quarters of what the U.S. spent on research and development during 2015.

Meanwhile, the analysis shows, scientists from the U.S. and other countries increasingly make discoveries and advancements as part of teams that involve researchers from around the world.

A June 15, 2017 Michigan Medicine University of Michigan news release (also on EurekAlert), which originated the news item, details the research team’s insights,

The last 15 years have ushered in an era of “team science” as research funding in the U.S., Great Britain and other European countries, as well as Canada and Australia, stagnated. The number of authors has also grown over time. For example, in 2000 only two percent of the research papers the new study looked at included 21 or more authors — a number that increased to 12.5 percent in 2015.

The new findings, published in JCI Insight by a team of University of Michigan researchers, come at a critical time for the debate over the future of U.S. federal research funding. The study is based on a careful analysis of original research papers published in six top-tier and four mid-tier journals from 2000 to 2015, in addition to data on R&D investment from those same years.

The study builds on other work that has also warned of America’s slipping status in the world of science and medical research, and the resulting impact on the next generation of aspiring scientists.

“It’s time for U.S. policy-makers to reflect and decide whether the year-to-year uncertainty in National Institutes of Health budget and the proposed cuts are in our societal and national best interest,” says Bishr Omary, M.D., Ph.D., senior author of the new data-supported opinion piece and chief scientific officer of Michigan Medicine, U-M’s academic medical center. “If we continue on the path we’re on, it will be harder to maintain our lead and, even more importantly, we could be disenchanting the next generation of bright and passionate biomedical scientists who see a limited future in pursuing a scientist or physician-investigator career.”

The analysis charts South Korea’s entry into the top 10 countries for publications, as well as China’s leap from outside the top 10 in 2000 to fourth place in 2015. They also track the major increases in support for research in South Korea and Singapore since the start of the 21st Century.

Meticulous tracking

First author of the study, U-M informationist Marisa Conte, and Omary co-led a team that looked carefully at the currency of modern science: peer-reviewed basic science and clinical research papers describing new findings, published in journals with long histories of accepting among the world’s most significant discoveries.

They reviewed every issue of six top-tier international journals (JAMA, Lancet, the New England Journal of Medicine, Cell, Nature and Science), and four mid-ranking journals (British Medical Journal, JAMA Internal Medicine, Journal of Cell Science, FASEB Journal), chosen to represent the clinical and basic science aspects of research.

The analysis included only papers that reported new results from basic research experiments, translational studies, clinical trials, meta-analyses, and studies of disease outcomes. Author affiliations for corresponding authors and all other authors were recorded by country.

The rise in global cooperation is striking. In 2000, 25 percent of papers in the six top-tier journals were by teams that included researchers from at least two countries. In 2015, that figure was closer to 50 percent. The increasing need for multidisciplinary approaches to make major advances, coupled with the advances of Internet-based collaboration tools, likely have something to do with this, Omary says.

The authors, who also include Santiago Schnell, Ph.D. and Jing Liu, Ph.D., note that part of their group’s interest in doing the study sprang from their hypothesis that a flat NIH budget is likely to have negative consequences but they wanted to gather data to test their hypothesis.

They also observed what appears to be an increasing number of Chinese-born scientists who had trained in the U.S. going back to China after their training, where once most of them would have sought to stay in the U.S. In addition, Singapore has been able to recruit several top notch U.S. and other international scientists due to their marked increase in R&D investments.

The same trends appear to be happening in Great Britain, Australia, Canada, France, Germany and other countries the authors studied – where research investing has stayed consistent when measured as a percentage of the U.S. total over the last 15 years.

The authors note that their study is based on data up to 2015, and that in the current 2017 federal fiscal year, funding for NIH has increased thanks to bipartisan Congressional appropriations. The NIH contributes to most of the federal support for medical and basic biomedical research in the U.S. But discussion of cuts to research funding that hinders many federal agencies is in the air during the current debates for the 2018 budget. Meanwhile, the Chinese R&D spending is projected to surpass the U.S. total by 2022.

“Our analysis, albeit limited to a small number of representative journals, supports the importance of financial investment in research,” Omary says. “I would still strongly encourage any child interested in science to pursue their dream and passion, but I hope that our current and future investment in NIH and other federal research support agencies will rise above any branch of government to help our next generation reach their potential and dreams.”

Here’s a link to and a citation for the paper,

Globalization and changing trends of biomedical research output by Marisa L. Conte, Jing Liu, Santiago Schnell, and M. Bishr Omary. JCI Insight. 2017;2(12):e95206. doi:10.1172/jci.insight.95206 Published June 15, 2017

Copyright © 2017, American Society for Clinical Investigation

This paper is open access.

The notion of a race and looking back to see who, if anyone, is gaining on you reminded me of a local piece of sports lore, the Roger Bannister-John Landy ‘Miracle Mile’. In the run-up to the 1954 Commonwealth Games held in Vancouver, Canada, two runners were known to have broken the 4-minute-mile barrier (previously thought to be impossible), and their meeting was considered historic. Here’s more from the miraclemile1954.com website,

On August 7, 1954 during the British Empire and Commonwealth Games in Vancouver, B.C., England’s Roger Bannister and Australian John Landy met for the first time in the one mile run at the newly constructed Empire Stadium.

Both men had broken the four minute barrier previously that year. Bannister was the first to break the mark with a time of 3:59.4 on May 6th in Oxford, England. Subsequently, on June 21st in Turku, Finland, John Landy became the new record holder with an official time of 3:58.

The world watched eagerly as both men approached the starting blocks. As 35,000 enthusiastic fans looked on, no one knew what would take place on that historic day.

Promoted as “The Mile of the Century”, it would later be known as the “Miracle Mile”.

With only 90 yards to go in one of the world’s most memorable races, John Landy glanced over his left shoulder to check his opponent’s position. At that instant Bannister streaked by him to victory in a Commonwealth record time of 3:58.8. Landy’s second place finish in 3:59.6 marked the first time the four minute mile had been broken by two men in the same race.

The website hosts an image of the moment, memorialized in bronze, when Landy looks to his left as Bannister passes him on his right,

Statue: Jack Harman. Photo: Paul Joseph from Vancouver, BC, Canada – roger bannister running the four minute mile. Uploaded by Skeezix1000, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=9801121

Getting back to science, I wonder if some day we’ll stop thinking of it as a race where, inevitably, there’s one winner and everyone else loses and find a new metaphor.

Robots and a new perspective on disability

I’ve long wondered about how disabilities would be viewed in a future (h/t May 4, 2017 news item on phys.org) where technology could render them largely irrelevant. A May 4, 2017 essay by Thusha (Gnanthusharan) Rajendran of Heriot-Watt University on TheConversation.com provides a perspective on the possibilities (Note: Links have been removed),

When dealing with the otherness of disability, the Victorians in their shame built huge out-of-sight asylums, and their legacy of “them” and “us” continues to this day. Two hundred years later, technologies offer us an alternative view. The digital age is shattering barriers, and what used to be the norm is now being challenged.

What if we could change the environment, rather than the person? What if a virtual assistant could help a visually impaired person with their online shopping? And what if a robot “buddy” could help a person with autism navigate the nuances of workplace politics? These are just some of the questions that are being asked and which need answers as the digital age challenges our perceptions of normality.

The treatment of people with developmental conditions has a chequered history. In towns and cities across Britain, you will still see large Victorian buildings that were once places to “look after” people with disabilities, that is, remove them from society. Things became worse still during the time of the Nazis with an idealisation of the perfect and rejection of Darwin’s idea of natural diversity.

Today we face similar challenges about differences versus abnormalities. Arguably, current diagnostic systems do not help, because they diagnose the person and not “the system”. So, a child has challenging behaviour, rather than being in distress; the person with autism has a communication disorder rather than simply not being understood.

Natural-born cyborgs

In contrast, the digital world is all about systems. The field of human-computer interaction is about how things work between humans and computers or robots. Philosopher Andy Clark argues that humans have always been natural-born cyborgs – that is, we have always used technology (in its broadest sense) to improve ourselves.

The most obvious example is language itself. In the digital age we can become truly digitally enhanced. How many of us Google something rather than remembering it? How do you feel when you have no access to wi-fi? How much do we favour texting, tweeting and Facebook over face-to-face conversations? How much do we love and need our smartphones?

In the new field of social robotics, my colleagues and I are developing a robot buddy to help adults with autism to understand, for example, if their boss is pleased or displeased with their work. For many adults with autism, it is not the work itself that stops them from having successful careers, it is the social environment surrounding work. From the stress-inducing interview to workplace politics, the modern world of work is a social minefield. It is not easy, at times, for us neurotypicals, but for a person with autism it is a world full of contradictions and implied meaning.

Rajendran goes on to highlight efforts with autistic individuals; he also includes this video of his December 14, 2016 TEDx Heriot-Watt University talk, which largely focuses on his work with robots and autism (Note: This runs approximately 15 mins.),

The talk reminded me of a Feb. 6, 2017 posting (scroll down about 33% of the way) where I discussed a recent book about science communication and its failure to recognize the importance of pop culture in that endeavour. As an example, I used a then recent announcement from MIT (Massachusetts Institute of Technology) about their emotion detection wireless application and the almost simultaneous appearance of that application in a Feb. 2, 2017 episode of The Big Bang Theory (a popular US television comedy) featuring a character who could be seen as autistic making use of the emotion detection device.

In any event, the work described in the MIT news release is very similar to Rajendran’s, albeit communicated to the public through entirely different channels: a TEDx talk and TheConversation.com (channels aimed at academics and those with academic interests) versus a pop culture television comedy with broad appeal.

Nanotechnology-enabled warming textile being introduced at Berlin (Germany) Fashion Week July 4 – 7, 2017

Acanthurus GmbH, a Frankfurt-based (Germany) nanotechnology company announced its participation in Berlin Fashion Week’s (July 4 – 7, 2017) showcase for technology in fashion, Panorama Berlin  (according to Berlin Fashion Week’s Fashion Fair Highlights in July 2017 webpage; scroll down to Panorama Berlin subsection).

Here are more details about Acanthurus’ participation from a July 4, 2017 news item on innovationintextiles.com,

This week, Frankfurt-based nanotechnology company Acanthurus GmbH will introduce its innovative nanothermal warming textile technology nanogy at the Berlin FashionTech exhibition. An innovative warming technology was developed by Chinese market leader j-NOVA for the European market, under the brand name nanogy.

A July 3, 2017 nanogy press release, which originated the news item, offers another perspective on the story,

Too cold for your favorite dress? Leave your heavy coat at home and stay warm with ground-breaking nanotechnology instead.

Frankfurt-based nano technology company Acanthurus GmbH has brought an innovative warming technology developed by Chinese market leader j-NOVA© to the European market, under the brand name nanogy. “This will make freezing a thing of the past,” says Carsten Wortmann, founder and CEO of Acanthurus GmbH. The ultra-light, high-tech textiles can be integrated into any garment – including that go-to jacket everyone loves to wear on chilly days. All you need is a standard power bank to feel the warmth flow through your body, even on the coldest of days.

The innovative, lightweight technology is completely non-metallic, meaning it emits no radiation. The non-metallic nature of the technology allows it to be washed at any temperature, so there’s no need to worry about accidental spillages, whatever the circumstances. The technology is extremely thin and flexible and, as there is absolutely no metal included, can be scrunched or crumpled without damaging its function. This also means that the technology can be integrated into garments without any visible lines or hems, making it the optimal solution for fashion and textile companies alike.

nanogy measures an energy conversion rate of over 90%, making it one of the most sustainable and environmentally friendly warming solutions ever developed. The technology is also recyclable, so consumers can dispose of it as they would any other garment.

“Our focus is not just to provide world class technology, but also to improve people’s lives without harming our environment. We call this a nanothermal experience, and our current use cases have only covered a fraction of potential opportunities,” says Jeni Odley, Director of Acanthurus GmbH. As expected for any modern tech company, users can even control the temperature of the textile with a mobile app, making the integration of nanogy a simplified, one-touch experience.

I wasn’t able to find much about j-NOVA but there was this from the ISPO Munich 2017 exhibitor details webpage,

j-NOVA.WORKS Co., Ltd.

4-B302, No. 328 Creative Industry Park, Xinhu St., Suzhou Industrial Park
215123 Jiangsu Prov.
China
P  +49 69 130277-70
F  +49 69 130277-75

As the new generation of warming technology, we introduce our first series of intelligent textiles: j-NOVA intelligent warming textiles.

The intelligent textiles are based on complex nano-technology, and maintain a constant temperature whilst preserving a low energy conversion rate. The technology can achieve an efficiency level of up to 90%, depending on its power source.

The combination of advanced nano material and intelligent modules bring warmth from the fabric and garment itself, which can be scrunched up or washed without affecting its function.

j-NOVA.WORKS aims to balance technology with tradition, and to improve the relationship between nature and humans.

Acanthurus GmbH is the sole European Distributor.

So, j-NOVA is the company with the nanotechnology and Acanthurus represents their interests in Europe. I wish I could find out more about the technology but this is the best I’ve been able to accomplish in the time I have available.

Brain stuff: quantum entanglement and a multi-dimensional universe

I have two brain news bits, one about neural networks and quantum entanglement and another about how the brain operates in more than three dimensions.

Quantum entanglement and neural networks

A June 13, 2017 news item on phys.org describes how machine learning can be used to solve problems in physics (Note: Links have been removed),

Machine learning, the field that’s driving a revolution in artificial intelligence, has cemented its role in modern technology. Its tools and techniques have led to rapid improvements in everything from self-driving cars and speech recognition to the digital mastery of an ancient board game.

Now, physicists are beginning to use machine learning tools to tackle a different kind of problem, one at the heart of quantum physics. In a paper published recently in Physical Review X, researchers from JQI [Joint Quantum Institute] and the Condensed Matter Theory Center (CMTC) at the University of Maryland showed that certain neural networks—abstract webs that pass information from node to node like neurons in the brain—can succinctly describe wide swathes of quantum systems.

An artist’s rendering of a neural network with two layers. At the top is a real quantum system, like atoms in an optical lattice. Below is a network of hidden neurons that capture their interactions (Credit: E. Edwards/JQI)

A June 12, 2017 JQI news release by Chris Cesare, which originated the news item, describes how neural networks can represent quantum entanglement,

Dongling Deng, a JQI Postdoctoral Fellow who is a member of CMTC and the paper’s first author, says that researchers who use computers to study quantum systems might benefit from the simple descriptions that neural networks provide. “If we want to numerically tackle some quantum problem,” Deng says, “we first need to find an efficient representation.”

On paper and, more importantly, on computers, physicists have many ways of representing quantum systems. Typically these representations comprise lists of numbers describing the likelihood that a system will be found in different quantum states. But it becomes difficult to extract properties or predictions from a digital description as the number of quantum particles grows, and the prevailing wisdom has been that entanglement—an exotic quantum connection between particles—plays a key role in thwarting simple representations.

The neural networks used by Deng and his collaborators—CMTC Director and JQI Fellow Sankar Das Sarma and Fudan University physicist and former JQI Postdoctoral Fellow Xiaopeng Li—can efficiently represent quantum systems that harbor lots of entanglement, a surprising improvement over prior methods.

What’s more, the new results go beyond mere representation. “This research is unique in that it does not just provide an efficient representation of highly entangled quantum states,” Das Sarma says. “It is a new way of solving intractable, interacting quantum many-body problems that uses machine learning tools to find exact solutions.”

Neural networks and their accompanying learning techniques powered AlphaGo, the computer program that beat some of the world’s best Go players last year (and the top player this year). The news excited Deng, an avid fan of the board game. Last year, around the same time as AlphaGo’s triumphs, a paper appeared that introduced the idea of using neural networks to represent quantum states, although it gave no indication of exactly how wide the tool’s reach might be. “We immediately recognized that this should be a very important paper,” Deng says, “so we put all our energy and time into studying the problem more.”

The result was a more complete account of the capabilities of certain neural networks to represent quantum states. In particular, the team studied neural networks that use two distinct groups of neurons. The first group, called the visible neurons, represents real quantum particles, like atoms in an optical lattice or ions in a chain. To account for interactions between particles, the researchers employed a second group of neurons—the hidden neurons—which link up with visible neurons. These links capture the physical interactions between real particles, and as long as the number of connections stays relatively small, the neural network description remains simple.

Specifying a number for each connection and mathematically forgetting the hidden neurons can produce a compact representation of many interesting quantum states, including states with topological characteristics and some with surprising amounts of entanglement.

Beyond its potential as a tool in numerical simulations, the new framework allowed Deng and collaborators to prove some mathematical facts about the families of quantum states represented by neural networks. For instance, neural networks with only short-range interactions—those in which each hidden neuron is only connected to a small cluster of visible neurons—have a strict limit on their total entanglement. This technical result, known as an area law, is a research pursuit of many condensed matter physicists.

These neural networks can’t capture everything, though. “They are a very restricted regime,” Deng says, adding that they don’t offer an efficient universal representation. If they did, they could be used to simulate a quantum computer with an ordinary computer, something physicists and computer scientists think is very unlikely. Still, the collection of states that they do represent efficiently, and the overlap of that collection with other representation methods, is an open problem that Deng says is ripe for further exploration.
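The visible/hidden construction described in the news release is, in spirit, a restricted Boltzmann machine (RBM) wavefunction. Here is a minimal sketch of the “mathematically forgetting the hidden neurons” step: summing over all hidden-unit configurations has a closed form, so the amplitude depends only on the connection weights. The parameters below are random placeholders for illustration, not the paper’s actual models.

```python
import numpy as np

# Toy RBM wavefunction: visible spins s_i = +/-1 stand in for physical
# particles; hidden units mediate their interactions and are summed out.
rng = np.random.default_rng(0)
n_visible, n_hidden = 4, 3
a = rng.normal(scale=0.1, size=n_visible)              # visible biases
b = rng.normal(scale=0.1, size=n_hidden)               # hidden biases
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))  # visible-hidden couplings

def amplitude(s):
    """Unnormalized amplitude psi(s) with hidden units traced out in closed form."""
    s = np.asarray(s, dtype=float)
    theta = b + s @ W
    return np.exp(a @ s) * np.prod(2.0 * np.cosh(theta))

def amplitude_brute_force(s):
    """Same quantity by explicitly summing over every hidden configuration."""
    s = np.asarray(s, dtype=float)
    total = 0.0
    for idx in range(2 ** n_hidden):
        h = np.array([1.0 if (idx >> j) & 1 else -1.0 for j in range(n_hidden)])
        total += np.exp(a @ s + b @ h + s @ W @ h)
    return total

s = [1, -1, 1, 1]
print(amplitude(s), amplitude_brute_force(s))  # the two methods agree
```

The point of the closed form is compactness: the description grows with the number of connections, not with the exponentially large number of hidden configurations, which is what makes the representation “simple” when each hidden neuron connects to only a few visible ones.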

Here’s a link to and a citation for the paper,

Quantum Entanglement in Neural Network States by Dong-Ling Deng, Xiaopeng Li, and S. Das Sarma. Phys. Rev. X 7, 021021 – Published 11 May 2017

This paper is open access.

Blue Brain and the multidimensional universe

Blue Brain is a Swiss government brain research initiative which officially came to life in 2006 although the initial agreement between the École Polytechnique Fédérale de Lausanne (EPFL) and IBM was signed in 2005 (according to the project’s Timeline page). Moving on, the project’s latest research reveals something astounding (from a June 12, 2017 Frontiers Publishing press release on EurekAlert),

For most people, it is a stretch of the imagination to understand the world in four dimensions but a new study has discovered structures in the brain with up to eleven dimensions – ground-breaking work that is beginning to reveal the brain’s deepest architectural secrets.

Using algebraic topology in a way that it has never been used before in neuroscience, a team from the Blue Brain Project has uncovered a universe of multi-dimensional geometrical structures and spaces within the networks of the brain.

The research, published today in Frontiers in Computational Neuroscience, shows that these structures arise when a group of neurons forms a clique: each neuron connects to every other neuron in the group in a very specific way that generates a precise geometric object. The more neurons there are in a clique, the higher the dimension of the geometric object.

“We found a world that we had never imagined,” says neuroscientist Henry Markram, director of Blue Brain Project and professor at the EPFL in Lausanne, Switzerland, “there are tens of millions of these objects even in a small speck of the brain, up through seven dimensions. In some networks, we even found structures with up to eleven dimensions.”

Markram suggests this may explain why it has been so hard to understand the brain. “The mathematics usually applied to study networks cannot detect the high-dimensional structures and spaces that we now see clearly.”

If 4D worlds stretch our imagination, worlds with 5, 6 or more dimensions are too complex for most of us to comprehend. This is where algebraic topology comes in: a branch of mathematics that can describe systems with any number of dimensions. The mathematicians who brought algebraic topology to the study of brain networks in the Blue Brain Project were Kathryn Hess from EPFL and Ran Levi from Aberdeen University.

“Algebraic topology is like a telescope and microscope at the same time. It can zoom into networks to find hidden structures – the trees in the forest – and see the empty spaces – the clearings – all at the same time,” explains Hess.

In 2015, Blue Brain published the first digital copy of a piece of the neocortex – the most evolved part of the brain and the seat of our sensations, actions, and consciousness. In this latest research, using algebraic topology, multiple tests were performed on the virtual brain tissue to show that the multi-dimensional brain structures discovered could never be produced by chance. Experiments were then performed on real brain tissue in the Blue Brain’s wet lab in Lausanne confirming that the earlier discoveries in the virtual tissue are biologically relevant and also suggesting that the brain constantly rewires during development to build a network with as many high-dimensional structures as possible.

When the researchers presented the virtual brain tissue with a stimulus, cliques of progressively higher dimensions assembled momentarily to enclose high-dimensional holes, which the researchers refer to as cavities. “The appearance of high-dimensional cavities when the brain is processing information means that the neurons in the network react to stimuli in an extremely organized manner,” says Levi. “It is as if the brain reacts to a stimulus by building then razing a tower of multi-dimensional blocks, starting with rods (1D), then planks (2D), then cubes (3D), and then more complex geometries with 4D, 5D, etc. The progression of activity through the brain resembles a multi-dimensional sandcastle that materializes out of the sand and then disintegrates.”

The big question these researchers are asking now is whether the intricacy of tasks we can perform depends on the complexity of the multi-dimensional “sandcastles” the brain can build. Neuroscience has also been struggling to find where the brain stores its memories. “They may be ‘hiding’ in high-dimensional cavities,” Markram speculates.

###

About Blue Brain

The aim of the Blue Brain Project, a Swiss brain initiative founded and directed by Professor Henry Markram, is to build accurate, biologically detailed digital reconstructions and simulations of the rodent brain, and ultimately, the human brain. The supercomputer-based reconstructions and simulations built by Blue Brain offer a radically new approach for understanding the multilevel structure and function of the brain. http://bluebrain.epfl.ch

About Frontiers

Frontiers is a leading community-driven open-access publisher. By taking publishing entirely online, we drive innovation with new technologies to make peer review more efficient and transparent. We provide impact metrics for articles and researchers, and merge open access publishing with a research network platform – Loop – to catalyse research dissemination, and popularize research to the public, including children. Our goal is to increase the reach and impact of research articles and their authors. Frontiers has received the ALPSP Gold Award for Innovation in Publishing in 2014. http://www.frontiersin.org.

Here’s a link to and a citation for the paper,

Cliques of Neurons Bound into Cavities Provide a Missing Link between Structure and Function by Michael W. Reimann, Max Nolte, Martina Scolamiero, Katharine Turner, Rodrigo Perin, Giuseppe Chindemi, Paweł Dłotko, Ran Levi, Kathryn Hess, and Henry Markram. Front. Comput. Neurosci., 12 June 2017 | https://doi.org/10.3389/fncom.2017.00048

This paper is open access.
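For readers who want a concrete handle on the clique-counting described in the press release, here’s a rough sketch. A clique of k neurons (every neuron connected to every other) corresponds to a geometric object of dimension k − 1. The actual study counted *directed* cliques in digitally reconstructed networks of tens of thousands of neurons; the six-node graph below is entirely made up,

```python
from itertools import combinations

# Hypothetical toy network: 6 "neurons" with made-up connections
edges = {frozenset(e) for e in [
    (0, 1), (0, 2), (1, 2), (2, 3),
    (3, 4), (3, 5), (4, 5), (2, 4), (0, 3),
]}
nodes = sorted({n for e in edges for n in e})

def is_clique(group):
    """Every pair of neurons in the group must be connected."""
    return all(frozenset(p) in edges for p in combinations(group, 2))

# A clique of k neurons corresponds to a geometric object of dimension
# k - 1: an edge is 1D, a triangle 2D, a tetrahedron 3D, and so on.
counts = {}
for k in range(1, len(nodes) + 1):
    n_cliques = sum(is_clique(g) for g in combinations(nodes, k))
    if n_cliques:
        counts[k - 1] = n_cliques

print(counts)  # {0: 6, 1: 9, 2: 4}: six neurons, nine edges, four triangles
```

On real cortical-scale networks this brute-force enumeration would be hopeless, which is where the algebraic topology tools contributed by Hess and Levi come in.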

*Feb. 3, 2021: ‘on’ changed to ‘in’

2017 S.NET annual meeting early bird registration open until July 14, 2017

The Society for the Study of New and Emerging Technologies (S.NET), which at one time was known as the Society for the Study of Nano and other Emerging Technologies, is holding its 2017 annual meeting in Arizona, US. Here’s more from a July 4, 2017 S.NET notice (received via email),

We have an exciting schedule planned for our 2017 meeting in Phoenix, Arizona. Our confirmed plenary speakers – Professors Langdon Winner, Alfred Nordmann and Ulrike Felt – and a diverse host of researchers from across the planet promise to make this conference intellectually engaging, as well as exciting.

If you haven’t already, make sure to register for the conference and the dinner. THE DEADLINE HAS BEEN MOVED BACK TO JULY 14, 2017.

I tried to find more information about the meeting and discovered the meeting theme here in the February 2017 S.NET Newsletter,

October 9-11, 2017, Arizona State University, Tempe (USA)

Conference Theme: Engaging the Flux

Even the most seemingly stable entities fluctuate over time. Facts and artifacts, cultures and constitutions, people and planets. As the new and the old act, interact and intra-act within broader systems of time, space and meaning, we observe—and necessarily engage with—the constantly changing forms of socio-technological orders. As scholars and practitioners of new and emerging sciences and technologies, we are constantly tracking these moving targets, and often from within them. As technologists and researchers, we are also acutely aware that our research activities can influence the developmental trajectories of our objects of concern and study, as well as ourselves, our colleagues and the governance structures in which we live and work.

“Engaging the Flux” captures this sense that ubiquitous change is all about us, operative at all observable scales. “Flux” points to the perishability of apparently natural orders, as well as apparently stable technosocial orders. In embracing flux as its theme, the 2017 conference encourages participants to examine what the widely acknowledged acceleration of change reverberating across the planet means for the production of the technosciences, the social studies of knowledge production, art practices that engage technosciences and public deliberations about the societal significance of these practices in the contemporary moment.

This year’s conference theme aims to encourage us to examine the ways we—as scholars, scientists, artists, experts, citizens—have and have not taken into account the myriad modulations flowing and failing to flow from our engagements with our objects of study. The theme also invites us to anticipate how the conditions that partially structure these engagements may themselves be changing.

Our goal is to draw a rich range of examinations of flux and its implications for technoscientific and technocultural practices, broadly construed. Questions of specific interest include: Given the pervasiveness of political, ecological and technological fluctuations, what are the most socially responsible roles for experts, particularly in the context of policymaking? What would it mean to not merely accept perishability, but to lean into it, to positively embrace the going under of technological systems? What value can imaginaries offer in developing navigational capacities in periods of accelerated change? How can young and junior researchers —in social sciences, natural sciences, humanities or engineering— position themselves for meaningful, rewarding careers given the complementary uncertainties? How can the growing body of research straddling art and science communities help us make sense of flux and chart a course through it? What types of recalibrations are called for in order to speak effectively to diverse, and increasingly divergent, publics about the value of knowledge production and scientific rigor?

There are a few more details about the conference here on the S.NET 2017 meeting registration page,

The 2017 S.NET conference is held in Phoenix, Arizona (USA) and hosted by Arizona State University. This year’s meeting will provide a forum for scholarly engagement and reflection on the meaning of coupled socio-technical change as a contemporary political phenomenon, a recurrent historical theme, and an object of future anticipation.

HOTEL BLOCK – the new Marriott in downtown Phoenix has reserved rooms at $139 (single) or $159 (double bed). Please use the link on the S.Net home page to book your room.

REGISTRATION for non-students:
Early bird pricing is available until Saturday, July 14, 2017.
Registration increases to $220 starting Sunday, July 15, 2017.
Faculty/Postdoc/private industry/gov employee: $175
Student – submitting abstract or poster: $50
Student – not submitting abstract or poster: $100

There you have it.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, at least if one takes the word of CBC journalist Matthew Braga, whose June 14, 2017 article is excerpted further down in this post.

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives but at that point the article’s focus is exclusively Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “the AI business is unlike any other before it” and “it is the technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’ which, according to his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Meet Pepper, a robot for health care clinical settings

A Canadian project to introduce robots like Pepper into clinical settings (aside: can seniors’ facilities be far behind?) is the subject of a June 23, 2017 news item on phys.org,

McMaster and Ryerson universities today announced the Smart Robots for Health Communication project, a joint research initiative designed to introduce social robotics and artificial intelligence into clinical health care.

A June 22, 2017 McMaster University news release, which originated the news item, provides more detail,

With the help of Softbank’s humanoid robot Pepper and IBM Bluemix Watson Cognitive Services, the researchers will study health information exchange through a state-of-the-art human-robot interaction system. The project is a collaboration between David Harris Smith, professor in the Department of Communication Studies and Multimedia at McMaster University, Frauke Zeller, professor in the School of Professional Communication at Ryerson University and Hermenio Lima, a dermatologist and professor of medicine at McMaster’s Michael G. DeGroote School of Medicine. His main research interests are in the area of immunodermatology and technology applied to human health.

The research project involves the development and analysis of physical and virtual human-robot interactions, and has the capability to improve healthcare outcomes by helping healthcare professionals better understand patients’ behaviour.

Zeller and Harris Smith have previously worked together on hitchBOT, the friendly hitchhiking robot that travelled across Canada and has since found its new home in the [Canada] Science and Technology Museum in Ottawa.

“Pepper will help us highlight some very important aspects and motives of human behaviour and communication,” said Zeller.

Designed to be used in professional environments, Pepper is a humanoid robot that can interact with people, ‘read’ emotions, learn, move and adapt to its environment, and even recharge on its own. Pepper is able to perform facial recognition and develop individualized relationships when it interacts with people.

Lima, the clinic director, said: “We are excited to have the opportunity to potentially transform patient engagement in a clinical setting, and ultimately improve healthcare outcomes by adapting to clients’ communications needs.”

At Ryerson, Pepper was funded by the Co-lab in the Faculty of Communication and Design. FCAD’s Co-lab provides strategic leadership, technological support and acquisitions of technologies that are shaping the future of communications.

“This partnership is a testament to the collaborative nature of innovation,” said dean of FCAD, Charles Falzon. “I’m thrilled to support this multidisciplinary project that pushes the boundaries of research, and allows our faculty and students to find uses for emerging tech inside and outside the classroom.”

“This project exemplifies the value that research in the Humanities can bring to the wider world, in this case building understanding and enhancing communications in critical settings such as health care,” says McMaster’s Dean of Humanities, Ken Cruikshank.

The integration of IBM Watson cognitive computing services with the state-of-the-art social robot Pepper, offers a rich source of research potential for the projects at Ryerson and McMaster. This integration is also supported by IBM Canada and [Southern Ontario Smart Computing Innovation Platform] SOSCIP by providing the project access to high performance research computing resources and staff in Ontario.

“We see this as the initiation of an ongoing collaborative university and industry research program to develop and test applications of embodied AI, a research program that is well-positioned to integrate and apply emerging improvements in machine learning and social robotics innovations,” said Harris Smith.

I just went to a presentation at the facility where my mother lives and it was all about delivering more individualized and better care for residents. Given that most seniors in British Columbia care facilities do not receive the number of service hours per resident recommended by the province due to funding issues, it seemed a well-meaning initiative offered in the face of daunting odds against success. Now with this news, I wonder what impact ‘Pepper’ might ultimately have on seniors and on the people who currently deliver service. Of course, this assumes that researchers will be able to tackle problems with understanding various accents and communication strategies, which are strongly influenced by culture and, over time, the aging process.

After writing that last paragraph I stumbled onto this June 27, 2017 Sage Publications press release on EurekAlert about a related matter,

Existing digital technologies must be exploited to enable a paradigm shift in current healthcare delivery which focuses on tests, treatments and targets rather than the therapeutic benefits of empathy. Writing in the Journal of the Royal Society of Medicine, Dr Jeremy Howick and Dr Sian Rees of the Oxford Empathy Programme, say a new paradigm of empathy-based medicine is needed to improve patient outcomes, reduce practitioner burnout and save money.

Empathy-based medicine, they write, re-establishes relationship as the heart of healthcare. “Time pressure, conflicting priorities and bureaucracy can make practitioners less likely to express empathy. By re-establishing the clinical encounter as the heart of healthcare, and exploiting available technologies, this can change”, said Dr Howick, a Senior Researcher in Oxford University’s Nuffield Department of Primary Care Health Sciences.

Technology is already available that could reduce the burden of practitioner paperwork by gathering basic information prior to consultation, for example via email or a mobile device in the waiting room.

During the consultation, the computer screen could be placed so that both patient and clinician can see it, a help to both if needed, for example, to show infographics on risks and treatment options to aid decision-making and the joint development of a treatment plan.

Dr Howick said: “The spread of alternatives to face-to-face consultations is still in its infancy, as is our understanding of when a machine will do and when a person-to-person relationship is needed.” However, he warned, technology can also get in the way. A computer screen can become a barrier to communication rather than an aid to decision-making. “Patients and carers need to be involved in determining the need for, and designing, new technologies”, he said.

I sincerely hope that the Canadian project has taken into account some of the issues described in the ’empathy’ press release and in the article, which can be found here,

Overthrowing barriers to empathy in healthcare: empathy in the age of the Internet
by J Howick and S Rees. Journal of the Royal Society of Medicine. Article first published online: June 27, 2017 DOI: https://doi.org/10.1177/0141076817714443

This article is open access.