Tag Archives: artificial intelligence (AI)

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical might have thought. (BTW, the 2019 material comes later, as I have taken a chronological approach to this posting.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new, according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future,] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or the Mechanical Turk, and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th-century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting, although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather intriguing policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

AI (artificial intelligence) and a hummingbird robot

Every once in a while I stumble across a hummingbird robot story (my August 12, 2011 posting and my August 1, 2014 posting). Here’s what the hummingbird robot looks like now (hint: there’s a significant reduction in size),

Caption: Purdue University researchers are building robotic hummingbirds that learn from computer simulations how to fly like a real hummingbird does. The robot is encased in a decorative shell. Credit: Purdue University photo/Jared Pike

I think this is the first time I’ve seen one of these projects not being funded by the military, which explains why the researchers are more interested in using these hummingbird robots for observing wildlife and for rescue efforts in emergency situations. Still, they do acknowledge these robots could also be used in covert operations.

From a May 9, 2019 news item on ScienceDaily,

What can fly like a bird and hover like an insect?

Your friendly neighborhood hummingbirds. If drones had this combo, they would be able to maneuver better through collapsed buildings and other cluttered spaces to find trapped victims.

Purdue University researchers have engineered flying robots that behave like hummingbirds, trained by machine learning algorithms based on various techniques the bird uses naturally every day.

This means that after learning from a simulation, the robot “knows” how to move around on its own like a hummingbird would, such as discerning when to perform an escape maneuver.

Artificial intelligence, combined with flexible flapping wings, also allows the robot to teach itself new tricks. Even though the robot can’t see yet, for example, it senses by touching surfaces. Each touch alters an electrical current, which the researchers realized they could track.

“The robot can essentially create a map without seeing its surroundings. This could be helpful in a situation when the robot might be searching for victims in a dark place — and it means one less sensor to add when we do give the robot the ability to see,” said Xinyan Deng, an associate professor of mechanical engineering at Purdue.
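For readers who like to see the idea in code, here is a minimal, purely illustrative sketch of “mapping by touch”: it assumes the robot streams motor-current samples and that a brush against a surface shows up as a brief spike above the hover baseline. None of the function names or numbers come from the Purdue project.

```python
# Toy sketch (not the Purdue code): inferring "touch" events from motor-current
# readings so a robot that cannot see can still note where obstacles are.
# The helper name, the baseline, and the threshold are all hypothetical.

def detect_contacts(current_samples, baseline, threshold=0.15):
    """Return sample indices where motor current jumps well above its hover
    baseline, which we treat here as the wing brushing a surface."""
    return [i for i, amps in enumerate(current_samples) if amps - baseline > threshold]

# Simulated current trace: steady hover (~0.5 A) with two brief spikes ("touches").
trace = [0.50, 0.51, 0.49, 0.72, 0.50, 0.50, 0.70, 0.52]
# Where the robot was at each sample (made-up positions along a wall).
positions = [(0.0, 0.1 * i) for i in range(len(trace))]

touch_points = [positions[i] for i in detect_contacts(trace, baseline=0.50)]
print("Crude obstacle map from touch events:", touch_points)
```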

The researchers even have a video,

A May 9, 2019 Purdue University news release (also on EurekAlert), which originated the news item, provides more detail,


The researchers [presented] their work on May 20 at the 2019 IEEE International Conference on Robotics and Automation in Montreal. A YouTube video is available at https://www.youtube.com/watch?v=hl892dHqfA&feature=youtu.be. [it’s the video I’ve embedded in the above]

Drones can’t be made infinitely smaller, due to the way conventional aerodynamics work. They wouldn’t be able to generate enough lift to support their weight.

But hummingbirds don’t use conventional aerodynamics – and their wings are resilient. “The physics is simply different; the aerodynamics is inherently unsteady, with high angles of attack and high lift. This makes it possible for smaller, flying animals to exist, and also possible for us to scale down flapping wing robots,” Deng said.

Researchers have been trying for years to decode hummingbird flight so that robots can fly where larger aircraft can’t. In 2011, the company AeroVironment, commissioned by DARPA, an agency within the U.S. Department of Defense, built a robotic hummingbird that was heavier than a real one but not as fast, with helicopter-like flight controls and limited maneuverability. It required a human to be behind a remote control at all times.

Deng’s group and her collaborators studied hummingbirds themselves for multiple summers in Montana. They documented key hummingbird maneuvers, such as making a rapid 180-degree turn, and translated them to computer algorithms that the robot could learn from when hooked up to a simulation.

Further study on the physics of insects and hummingbirds allowed Purdue researchers to build robots smaller than hummingbirds – and even as small as insects – without compromising the way they fly. The smaller the size, the greater the wing flapping frequency, and the more efficiently they fly, Deng says.

The robots have 3D-printed bodies, wings made of carbon fiber and laser-cut membranes. The researchers have built one hummingbird robot weighing 12 grams – the weight of the average adult Magnificent Hummingbird – and another insect-sized robot weighing 1 gram. The hummingbird robot can lift more than its own weight, up to 27 grams.

Designing their robots with higher lift gives the researchers more wiggle room to eventually add a battery and sensing technology, such as a camera or GPS. Currently, the robot needs to be tethered to an energy source while it flies – but that won’t be for much longer, the researchers say.

The robots could fly silently just as a real hummingbird does, making them more ideal for covert operations. And they stay steady through turbulence, which the researchers demonstrated by testing the dynamically scaled wings in an oil tank.

The robot requires only two motors and can control each wing independently of the other, which is how flying animals perform highly agile maneuvers in nature.

“An actual hummingbird has multiple groups of muscles to do power and steering strokes, but a robot should be as light as possible, so that you have maximum performance on minimal weight,” Deng said.

Robotic hummingbirds wouldn’t only help with search-and-rescue missions, but also allow biologists to more reliably study hummingbirds in their natural environment through the senses of a realistic robot.

“We learned from biology to build the robot, and now biological discoveries can happen with extra help from robots,” Deng said.
Simulations of the technology are available open-source at https://github.com/purdue-biorobotics/flappy.

Early stages of the work, including the Montana hummingbird experiments in collaboration with Bret Tobalske’s group at the University of Montana, were financially supported by the National Science Foundation.

The researchers have three papers available on arXiv.org (open access),

Learning Extreme Hummingbird Maneuvers on Flapping Wing Robots
Fan Fei, Zhan Tu, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Biological studies show that hummingbirds can perform extreme aerobatic maneuvers during fast escape. Given a sudden looming visual stimulus at hover, a hummingbird initiates a fast backward translation coupled with a 180-degree yaw turn, which is followed by instant posture stabilization in just under 10 wingbeats. Consider the wingbeat frequency of 40Hz, this aggressive maneuver is carried out in just 0.2 seconds. Inspired by the hummingbirds’ near-maximal performance during such extreme maneuvers, we developed a flight control strategy and experimentally demonstrated that such maneuverability can be achieved by an at-scale 12-gram hummingbird robot equipped with just two actuators. The proposed hybrid control policy combines model-based nonlinear control with model-free reinforcement learning. We use model-based nonlinear control for nominal flight control, as the dynamic model is relatively accurate for these conditions. However, during extreme maneuver, the modeling error becomes unmanageable. A model-free reinforcement learning policy trained in simulation was optimized to ‘destabilize’ the system and maximize the performance during maneuvering. The hybrid policy manifests a maneuver that is close to that observed in hummingbirds. Direct simulation-to-real transfer is achieved, demonstrating the hummingbird-like fast evasive maneuvers on the at-scale hummingbird robot.
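As a rough illustration of what a “hybrid” control policy of that kind might look like in code, here is a minimal sketch: a model-based controller computes the nominal command, and a learned, model-free term is blended in during aggressive maneuvers. Both controllers below are trivial stand-ins I invented for illustration; they are not the control law or the trained policy from the paper.

```python
import numpy as np

def nominal_controller(state, target):
    # Placeholder for the model-based nonlinear controller used in nominal flight.
    return 0.8 * (target - state)

def learned_policy(state, target):
    # Placeholder for a model-free policy trained with reinforcement learning.
    return 0.5 * np.tanh(target - state)

def hybrid_control(state, target, maneuvering):
    """Blend the two controllers: nominal control always runs; the learned
    correction is added only when an extreme maneuver is under way."""
    command = nominal_controller(state, target)
    if maneuvering:
        command = command + learned_policy(state, target)
    return command

state = np.array([0.0, 0.0])      # e.g., current yaw and pitch (illustrative)
target = np.array([np.pi, -0.2])  # e.g., a 180-degree yaw turn
print(hybrid_control(state, target, maneuvering=True))
```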

Acting is Seeing: Navigating Tight Space Using Flapping Wings
Zhan Tu, Fan Fei, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0868

Wings of flying animals can not only generate lift and control torques but also can sense their surroundings. Such dual functions of sensing and actuation coupled in one element are particularly useful for small sized bio-inspired robotic flyers, whose weight, size, and power are under stringent constraint. In this work, we present the first flapping-wing robot using its flapping wings for environmental perception and navigation in tight space, without the need for any visual feedback. As the test platform, we introduce the Purdue Hummingbird, a flapping-wing robot with 17cm wingspan and 12 grams weight, with a pair of 30-40Hz flapping wings driven by only two actuators. By interpreting the wing loading feedback and its variations, the vehicle can detect the presence of environmental changes such as grounds, walls, stairs, obstacles and wind gust. The instantaneous wing loading can be obtained through the measurements and interpretation of the current feedback by the motors that actuate the wings. The effectiveness of the proposed approach is experimentally demonstrated on several challenging flight tasks without vision: terrain following, wall following and going through a narrow corridor. To ensure flight stability, a robust controller was designed for handling unforeseen disturbances during the flight. Sensing and navigating one’s environment through actuator loading is a promising method for mobile robots, and it can serve as an alternative or complementary method to visual perception.

Flappy Hummingbird: An Open Source Dynamic Simulation of Flapping Wing Robots and Animals
Fan Fei, Zhan Tu, Yilun Yang, Jian Zhang, and Xinyan Deng
Purdue University, West Lafayette, IN, USA
https://arxiv.org/abs/1902.0962

Insects and hummingbirds exhibit extraordinary flight capabilities and can simultaneously master seemingly conflicting goals: stable hovering and aggressive maneuvering, unmatched by small scale man-made vehicles. Flapping Wing Micro Air Vehicles (FWMAVs) hold great promise for closing this performance gap. However, design and control of such systems remain challenging due to various constraints. Here, we present an open source high fidelity dynamic simulation for FWMAVs to serve as a testbed for the design, optimization and flight control of FWMAVs. For simulation validation, we recreated the hummingbird-scale robot developed in our lab in the simulation. System identification was performed to obtain the model parameters. The force generation, open-loop and closed-loop dynamic response between simulated and experimental flights were compared and validated. The unsteady aerodynamics and the highly nonlinear flight dynamics present challenging control problems for conventional and learning control algorithms such as Reinforcement Learning. The interface of the simulation is fully compatible with OpenAI Gym environment. As a benchmark study, we present a linear controller for hovering stabilization and a Deep Reinforcement Learning control policy for goal-directed maneuvering. Finally, we demonstrate direct simulation-to-real transfer of both control policies onto the physical robot, further demonstrating the fidelity of the simulation.
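Since that last abstract says the simulator exposes an OpenAI Gym-compatible interface, interacting with it should look roughly like any other Gym environment. The sketch below is hypothetical: the environment id “FlappyHover-v0” is a placeholder I made up, and I’m assuming the classic (pre-0.26) Gym API, so check the purdue-biorobotics/flappy repository for the real registration names and setup.

```python
import gym

# Hypothetical usage of a Gym-compatible flapping-wing simulator.
# "FlappyHover-v0" is a placeholder id, not confirmed from the repository.
env = gym.make("FlappyHover-v0")

obs = env.reset()
for _ in range(200):
    action = env.action_space.sample()            # random wing commands, for illustration
    obs, reward, done, info = env.step(action)    # classic Gym step signature
    if done:
        obs = env.reset()
env.close()
```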

Enjoy!

Electronics begone! Enter: the light-based brainlike computing chip

At this point, it’s possible I’m wrong, but I think this is the first ‘memristor’-type device (also called a neuromorphic chip) based on light rather than electronics that I’ve featured on this blog. In other words, it’s not, technically speaking, a memristor, but it has the same properties, so it is a neuromorphic chip.

Caption: The optical microchips that the researchers are working on developing are about the size of a one-cent piece. Credit: WWU Muenster – Peter Leßmann

A May 8, 2019 news item on Nanowerk announces this new approach to neuromorphic hardware (Note: A link has been removed),

Researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain.

The scientists produced a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses. The network is able to “learn” information and use this as a basis for computing and recognizing patterns. As the system functions solely with light and not with electrons, it can process data many times faster than traditional systems. …

A May 8, 2019 University of Münster press release (also on EurekAlert), which originated the news item, reveals the full story,

A technology that functions like a brain? In these times of artificial intelligence, this no longer seems so far-fetched – for example, when a mobile phone can recognise faces or languages. With more complex applications, however, computers still quickly come up against their own limitations. One of the reasons for this is that a computer traditionally has separate memory and processor units – the consequence of which is that all data have to be sent back and forth between the two. In this respect, the human brain is way ahead of even the most modern computers because it processes and stores information in the same place – in the synapses, or connections between neurons, of which there are a million-billion in the brain. An international team of researchers from the Universities of Münster (Germany), Oxford and Exeter (both UK) have now succeeded in developing a piece of hardware which could pave the way for creating computers which resemble the human brain. The scientists managed to produce a chip containing a network of artificial neurons that works with light and can imitate the behaviour of neurons and their synapses.

The researchers were able to demonstrate that such an optical neurosynaptic network is able to “learn” information and use this as a basis for computing and recognizing patterns – just as a brain can. As the system functions solely with light and not with traditional electrons, it can process data many times faster. “This integrated photonic system is an experimental milestone,” says Prof. Wolfram Pernice from Münster University and lead partner in the study. “The approach could be used later in many different fields for evaluating patterns in large quantities of data, for example in medical diagnoses.” The study is published in the latest issue of the “Nature” journal.

The story in detail – background and method used

Most of the existing approaches relating to so-called neuromorphic networks are based on electronics, whereas optical systems – in which photons, i.e. light particles, are used – are still in their infancy. The principle which the German and British scientists have now presented works as follows: optical waveguides that can transmit light and can be fabricated into optical microchips are integrated with so-called phase-change materials – which are already found today on storage media such as re-writable DVDs. These phase-change materials are characterised by the fact that they change their optical properties dramatically, depending on whether they are crystalline – when their atoms arrange themselves in a regular fashion – or amorphous – when their atoms organise themselves in an irregular fashion. This phase-change can be triggered by light if a laser heats the material up. “Because the material reacts so strongly, and changes its properties dramatically, it is highly suitable for imitating synapses and the transfer of impulses between two neurons,” says lead author Johannes Feldmann, who carried out many of the experiments as part of his PhD thesis at the Münster University.

In their study, the scientists succeeded for the first time in merging many nanostructured phase-change materials into one neurosynaptic network. The researchers developed a chip with four artificial neurons and a total of 60 synapses. The structure of the chip – consisting of different layers – was based on the so-called wavelength division multiplex technology, which is a process in which light is transmitted on different channels within the optical nanocircuit.

In order to test the extent to which the system is able to recognise patterns, the researchers “fed” it with information in the form of light pulses, using two different algorithms of machine learning. In this process, an artificial system “learns” from examples and can, ultimately, generalise them. In the case of the two algorithms used – both in so-called supervised and in unsupervised learning – the artificial network was ultimately able, on the basis of given light patterns, to recognise a pattern being sought – one of which was four consecutive letters.
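To make the idea of a phase-change photonic synapse a little more concrete, here is a toy numerical sketch, not the Münster device or its physics: each synapse is just a transmission coefficient between 0 and 1, a “write” pulse nudges that coefficient (the way partial crystallization or amorphization changes how much light a phase-change cell passes), and a neuron “spikes” when the summed transmitted power crosses a threshold. The 4-neuron, 60-synapse shape echoes the chip described above, but every number here is made up.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.uniform(0.2, 0.8, size=(4, 15))   # 4 neurons x 15 synapses = 60 synapses

def forward(input_power, weights, threshold=4.0):
    """Each synapse attenuates its input; a neuron spikes if the total
    transmitted power exceeds the threshold."""
    transmitted = weights * input_power
    return transmitted.sum(axis=1) > threshold

def write_pulse(weights, neuron, synapse, delta):
    """Nudge one synapse's transmission, clipped to the plausible 0..1 range."""
    weights[neuron, synapse] = np.clip(weights[neuron, synapse] + delta, 0.0, 1.0)

pattern = rng.uniform(0.0, 1.0, size=15)          # one input light pattern
print("spikes before a write pulse:", forward(pattern, weights))
write_pulse(weights, neuron=0, synapse=3, delta=0.2)
print("spikes after a write pulse: ", forward(pattern, weights))
```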

“Our system has enabled us to take an important step towards creating computer hardware which behaves similarly to neurons and synapses in the brain and which is also able to work on real-world tasks,” says Wolfram Pernice. “By working with photons instead of electrons we can exploit to the full the known potential of optical technologies – not only in order to transfer data, as has been the case so far, but also in order to process and store them in one place,” adds co-author Prof. Harish Bhaskaran from the University of Oxford.

A very specific example is that with the aid of such hardware cancer cells could be identified automatically. Further work will need to be done, however, before such applications become reality. The researchers need to increase the number of artificial neurons and synapses and increase the depth of neural networks. This can be done, for example, with optical chips manufactured using silicon technology. “This step is to be taken in the EU joint project ‘Fun-COMP’ by using foundry processing for the production of nanochips,” says co-author and leader of the Fun-COMP project, Prof. C. David Wright from the University of Exeter.

Here’s a link to and a citation for the paper,

All-optical spiking neurosynaptic networks with self-learning capabilities by J. Feldmann, N. Youngblood, C. D. Wright, H. Bhaskaran & W. H. P. Pernice. Nature, volume 569, pages 208–214 (2019). DOI: https://doi.org/10.1038/s41586-019-1157-8 Issue Date: 09 May 2019

This paper is behind a paywall.

For the curious, I found a little more information about Fun-COMP (functionally-scaled computer technology). It’s a European Commission (EC) Horizon 2020 project coordinated through the University of Exeter. For information with details such as the total cost, contribution from the EC, the list of partnerships and more there is the Fun-COMP webpage on fabiodisconzi.com.

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be  more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
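A toy illustration of that rotation idea, not the actual RUM equations from the paper: represent the running “memory” as a vector and let each word apply a rotation to it. Because rotations preserve the vector’s length, the state neither explodes nor vanishes as the sequence gets longer, which is the intuition behind rotation-based memory units. The per-word angles below are invented for the example; a real model would learn its rotation parameters from data.

```python
import numpy as np

def rotation(theta):
    """2-D rotation matrix; the real RUM works in much higher dimensions."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

# Hypothetical per-word rotation angles (a trained model would learn these).
word_angles = {"urban": 0.4, "raccoons": 1.1, "carry": 2.0, "roundworm": 0.7}

memory = np.array([1.0, 0.0])                      # initial memory state
for word in ["urban", "raccoons", "carry", "roundworm"]:
    memory = rotation(word_angles[word]) @ memory  # each word "swings" the vector

print("final memory vector:", memory)
print("length is preserved:", np.linalg.norm(memory))
```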

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić,  recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company Deepmind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper,’ not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 7, 2019, pp. 121–138. DOI: https://doi.org/10.1162/tacl_a_00258 Posted Online 2019

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

AI (artificial intelligence) artist got a show at a New York City art gallery

AI artists first hit my radar in August 2018 when Christie’s Auction House advertised an art auction of a ‘painting’ by an algorithm (artificial intelligence). There’s more in my August 31, 2018 posting but, briefly, a French art collective, Obvious, submitted a painting, “Portrait of Edmond de Belamy,” created by an artificial intelligence agent, to be sold for an estimated $7,000 – $10,000. They weren’t even close. According to Ian Bogost’s March 6, 2019 article for The Atlantic, the painting sold for $432,500 in October 2018.

It has also, Bogost notes in his article, occasioned an art show (Note: Links have been removed),

… part of “Faceless Portraits Transcending Time,” an exhibition of prints recently shown [February 13 – March 5, 2019] at the HG Contemporary gallery in Chelsea, the epicenter of New York’s contemporary-art world. All of them were created by a computer.

The catalog calls the show a “collaboration between an artificial intelligence named AICAN and its creator, Dr. Ahmed Elgammal,” a move meant to spotlight, and anthropomorphize, the machine-learning algorithm that did most of the work. According to HG Contemporary, it’s the first solo gallery exhibit devoted to an AI artist.

If they hadn’t found each other in the New York art scene, the players involved could have met on a Spike Jonze film set: a computer scientist commanding five-figure print sales from software that generates inkjet-printed images; a former hotel-chain financial analyst turned Chelsea techno-gallerist with apparent ties to fine-arts nobility; a venture capitalist with two doctoral degrees in biomedical informatics; and an art consultant who put the whole thing together, A-Team–style, after a chance encounter at a blockchain conference. Together, they hope to reinvent visual art, or at least to cash in on machine-learning hype along the way.

The show in New York City, “Faceless Portraits …,” exhibited work by an artificially intelligent artist-agent (I’m creating a new term to suit my purposes) that’s different from the one used by Obvious to create “Portrait of Edmond de Belamy.” As noted earlier, that painting sold for a lot of money (Note: Links have been removed),

Bystanders in and out of the art world were shocked. The print had never been shown in galleries or exhibitions before coming to market at auction, a channel usually reserved for established work. The winning bid was made anonymously by telephone, raising some eyebrows; art auctions can invite price manipulation. It was created by a computer program that generates new images based on patterns in a body of existing work, whose features the AI “learns.” What’s more, the artists who trained and generated the work, the French collective Obvious, hadn’t even written the algorithm or the training set. They just downloaded them, made some tweaks, and sent the results to market.

“We are the people who decided to do this,” the Obvious member Pierre Fautrel said in response to the criticism, “who decided to print it on canvas, sign it as a mathematical formula, put it in a gold frame.” A century after Marcel Duchamp made a urinal into art [emphasis mine] by putting it in a gallery, not much has changed, with or without computers. As Andy Warhol famously said, “Art is what you can get away with.”

A bit of a segue here: there is a controversy as to whether or not that ‘urinal art’, also known as The Fountain, should be attributed to Duchamp, as noted in my January 23, 2019 posting titled ‘Baroness Elsa von Freytag-Loringhoven, Marcel Duchamp, and the Fountain’.

Getting back to the main action, Bogost goes on to describe the technologies underlying the two different AI artist-agents (Note: Links have been removed),

… Using a computer is hardly enough anymore; today’s machines offer all kinds of ways to generate images that can be output, framed, displayed, and sold—from digital photography to artificial intelligence. Recently, the fashionable choice has become generative adversarial networks, or GANs, the technology that created Portrait of Edmond de Belamy. Like other machine-learning methods, GANs use a sample set—in this case, art, or at least images of it—to deduce patterns, and then they use that knowledge to create new pieces. A typical Renaissance portrait, for example, might be composed as a bust or three-quarter view of a subject. The computer may have no idea what a bust is, but if it sees enough of them, it might learn the pattern and try to replicate it in an image.

GANs use two neural nets (a way of processing information modeled after the human brain) to produce images: a “generator” and a “discerner.” The generator produces new outputs—images, in the case of visual art—and the discerner tests them against the training set to make sure they comply with whatever patterns the computer has gleaned from that data. The quality or usefulness of the results depends largely on having a well-trained system, which is difficult.
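For readers who want to see the generator/“discerner” loop Bogost describes reduced to code, here is a deliberately tiny sketch. It is not how Obvious or AICAN work: the “data” is a one-dimensional Gaussian, the generator has a single parameter, and the discriminator is a fixed scoring rule rather than a trained network, so only the adversarial structure of the loop survives.

```python
import numpy as np

rng = np.random.default_rng(1)
real_data = rng.normal(loc=3.0, scale=0.5, size=1000)   # stands in for the training set
gen_mean = 0.0                                           # the generator's only parameter

def discriminator_score(x):
    """Higher score = looks more like the real data (closer to the real mean).
    In a real GAN this would itself be a trained neural network."""
    return -abs(x - real_data.mean())

for step in range(200):
    fake = rng.normal(loc=gen_mean, scale=0.5)           # generator produces a sample
    # Generator update: nudge its parameter in whichever direction the
    # discriminator scores better -- a crude stand-in for gradient ascent.
    if discriminator_score(fake + 0.1) > discriminator_score(fake):
        gen_mean += 0.05
    else:
        gen_mean -= 0.05

print("generator mean after training:", round(gen_mean, 2), "(real mean is about 3.0)")
```

As the article goes on to explain, Elgammal’s CAN keeps this same two-part structure but swaps the similarity-enforcing judge for one that rewards novelty.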

That’s why folks in the know were upset by the Edmond de Belamy auction. The image was created by an algorithm the artists didn’t write, trained on an “Old Masters” image set they also didn’t create. The art world is no stranger to trend and bluster driving attention, but the brave new world of AI painting appeared to be just more found art, the machine-learning equivalent of a urinal on a plinth.

Ahmed Elgammal thinks AI art can be much more than that. A Rutgers University professor of computer science, Elgammal runs an art-and-artificial-intelligence lab, where he and his colleagues develop technologies that try to understand and generate new “art” (the scare quotes are Elgammal’s) with AI—not just credible copies of existing work, like GANs do. “That’s not art, that’s just repainting,” Elgammal says of GAN-made images. “It’s what a bad artist would do.”

Elgammal calls his approach a “creative adversarial network,” or CAN. It swaps a GAN’s discerner—the part that ensures similarity—for one that introduces novelty instead. The system amounts to a theory of how art evolves: through small alterations to a known style that produce a new one. That’s a convenient take, given that any machine-learning technique has to base its work on a specific training set.

The results are striking and strange, although calling them a new artistic style might be a stretch. They’re more like credible takes on visual abstraction. The images in the show, which were produced based on training sets of Renaissance portraits and skulls, are more figurative, and fairly disturbing. Their gallery placards name them dukes, earls, queens, and the like, although they depict no actual people—instead, human-like figures, their features smeared and contorted yet still legible as portraiture. Faceless Portrait of a Merchant, for example, depicts a torso that might also read as the front legs and rear haunches of a hound. Atop it, a fleshy orb comes across as a head. The whole scene is rippled by the machine-learning algorithm, in the way of so many computer-generated artworks.

Faceless Portrait of a Merchant, one of the AI portraits produced by Ahmed Elgammal and AICAN. (Artrendex Inc.) [downloaded from https://www.theatlantic.com/technology/archive/2019/03/ai-created-art-invades-chelsea-gallery-scene/584134/]

Bogost consults an expert on portraiture for a discussion about the particularities of portraiture and the shortcomings one might expect of an AI artist-agent (Note: A link has been removed),

“You can’t really pick a form of painting that’s more charged with cultural meaning than portraiture,” John Sharp, an art historian trained in 15th-century Italian painting and the director of the M.F.A. program in design and technology at Parsons School of Design, told me. The portrait isn’t just a style, it’s also a host for symbolism. “For example, men might be shown with an open book to show how they are in dialogue with that material; or a writing implement, to suggest authority; or a weapon, to evince power.” Take Portrait of a Youth Holding an Arrow, an early-16th-century Boltraffio portrait that helped train the AICAN database for the show. The painting depicts a young man, believed to be the Bolognese poet Girolamo Casio, holding an arrow at an angle in his fingers and across his chest. It doubles as both weapon and quill, a potent symbol of poetry and aristocracy alike. Along with the arrow, the laurels in Casio’s hair are emblems of Apollo, the god of both poetry and archery.

A neural net couldn’t infer anything about the particular symbolic trappings of the Renaissance or antiquity—unless it was taught to, and that wouldn’t happen just by showing it lots of portraits. For Sharp and other critics of computer-generated art, the result betrays an unforgivable ignorance about the supposed influence of the source material.

But for the purposes of the show, the appeal to the Renaissance might be mostly a foil, a way to yoke a hip, new technology to traditional painting in order to imbue it with the gravity of history: not only a Chelsea gallery show, but also an homage to the portraiture found at the Met. To reinforce a connection to the cradle of European art, some of the images are presented in elaborate frames, a decision the gallerist, Philippe Hoerle-Guggenheim (yes, that Guggenheim; he says the relation is “distant”) [the Guggenheim name is strongly associated with the visual arts by way of the two Guggenheim museums, one in New York City and the other in Bilbao, Spain], told me he insisted upon. Meanwhile, the technical method makes its way onto the gallery placards in an official-sounding way—“Creative Adversarial Network print.” But both sets of inspirations, machine-learning and Renaissance portraiture, get limited billing and zero explanation at the show. That was deliberate, Hoerle-Guggenheim said. He’s betting that the simple existence of a visually arresting AI painting will be enough to draw interest—and buyers. It would turn out to be a good bet.

The art market is just that: a market. Some of the most renowned names in art today, from Damien Hirst to Banksy, trade in the trade of art as much as—and perhaps even more than—in the production of images, objects, and aesthetics. No artist today can avoid entering that fray, Elgammal included. “Is he an artist?” Hoerle-Guggenheim asked himself of the computer scientist. “Now that he’s in this context, he must be.” But is that enough? In Sharp’s estimation, “Faceless Portraits Transcending Time” is a tech demo more than a deliberate oeuvre, even compared to the machine-learning-driven work of his design-and-technology M.F.A. students, who self-identify as artists first.

Judged as Banksy or Hirst might be, Elgammal’s most art-worthy work might be the Artrendex start-up itself, not the pigment-print portraits that its technology has output. Elgammal doesn’t treat his commercial venture like a secret, but he also doesn’t surface it as a beneficiary of his supposedly earnest solo gallery show. He’s argued that AI-made images constitute a kind of conceptual art, but conceptualists tend to privilege process over product or to make the process as visible as the product.

Hoerle-Guggenheim worked as a financial analyst for Hyatt before getting into the art business via some kind of consulting deal (he responded cryptically when I pressed him for details). …

This is a fascinating article and I have one last excerpt, which poses this question: is an AI artist-agent a collaborator or a medium? There’s also speculation about how AI artist-agents might impact the business of art (Note: Links have been removed),

… it’s odd to list AICAN as a collaborator—painters credit pigment as a medium, not as a partner. Even the most committed digital artists don’t present the tools of their own inventions that way; when they do, it’s only after years, or even decades, of ongoing use and refinement.

But Elgammal insists that the move is justified because the machine produces unexpected results. “A camera is a tool—a mechanical device—but it’s not creative,” he said. “Using a tool is an unfair term for AICAN. It’s the first time in history that a tool has had some kind of creativity, that it can surprise you.” Casey Reas, a digital artist who co-designed the popular visual-arts-oriented coding platform Processing, which he uses to create some of his fine art, isn’t convinced. “The artist should claim responsibility over the work rather than to cede that agency to the tool or the system they create,” he told me.

Elgammal’s financial interest in AICAN might explain his insistence on foregrounding its role. Unlike a specialized print-making technique or even the Processing coding environment, AICAN isn’t just a device that Elgammal created. It’s also a commercial enterprise.

Elgammal has already spun off a company, Artrendex, that provides “artificial-intelligence innovations for the art market.” One of them offers provenance authentication for artworks; another can suggest works a viewer or collector might appreciate based on an existing collection; another, a system for cataloging images by visual properties and not just by metadata, has been licensed by the Barnes Foundation to drive its collection-browsing website.

The company’s plans are more ambitious than recommendations and fancy online catalogs. When presenting on a panel about the uses of blockchain for managing art sales and provenance, Elgammal caught the attention of Jessica Davidson, an art consultant who advises artists and galleries in building collections and exhibits. Davidson had been looking for business-development partnerships, and she became intrigued by AICAN as a marketable product. “I was interested in how we can harness it in a compelling way,” she says.

If you have the time, I recommend reading Bogost’s March 6, 2019 article for The Atlantic in its entirety; these excerpts don’t do it justice.

Portraiture: what does it mean these days?

After reading the article I have a few questions. What exactly do Bogost and the arty types in the article mean by the word ‘portrait’? “Portrait of Edmond de Belamy” is an image of someone who doesn’t exist and never has, and the exhibit “Faceless Portraits Transcending Time” features images that don’t bear much or, in some cases, any resemblance to human beings. Maybe this is considered a dull question by people in the know but I’m an outsider and I found the paradox of portraits of nonexistent people, or nonpeople, kind of interesting.

BTW, I double-checked my assumption about portraits and found this definition in the Portrait Wikipedia entry (Note: Links have been removed),

A portrait is a painting, photograph, sculpture, or other artistic representation of a person [emphasis mine], in which the face and its expression is predominant. The intent is to display the likeness, personality, and even the mood of the person. For this reason, in photography a portrait is generally not a snapshot, but a composed image of a person in a still position. A portrait often shows a person looking directly at the painter or photographer, in order to most successfully engage the subject with the viewer.

So, portraits that aren’t portraits give rise to some philosophical questions but Bogost either didn’t want to jump into that rabbit hole (segue into yet another topic) or, as I hinted earlier, may have assumed his audience had previous experience of those kinds of discussions.

Vancouver (Canada) and a ‘portraiture’ exhibit at the Rennie Museum

By one of life’s coincidences, Vancouver’s Rennie Museum had an exhibit (February 16 – June 15, 2019) that illuminates questions about art collecting and portraiture. From a February 7, 2019 Rennie Museum news release,

[downloaded from https://renniemuseum.org/press-release-spring-2019-collected-works/] Courtesy: Rennie Museum

February 7, 2019

Press Release | Spring 2019: Collected Works
By rennie museum

rennie museum is pleased to present Spring 2019: Collected Works, a group exhibition encompassing the mediums of photography, painting and film. A portraiture of the collecting spirit [emphasis mine], the works exhibited invite exploration of what collected objects, and both the considered and unintentional ways they are displayed, inform us. Featuring the works of four artists—Andrew Grassie, William E. Jones, Louise Lawler and Catherine Opie—the exhibition runs from February 16 to June 15, 2019.

Four exquisite paintings by Scottish painter Andrew Grassie detailing the home and private storage space of a major art collector provide a peek at how the passionately devoted integrates and accommodates the physical embodiments of such commitment into daily life. Grassie’s carefully constructed, hyper-realistic images also pose the question, “What happens to art once it’s sold?” In the transition from pristine gallery setting to idiosyncratic private space, how does the new context infuse our reading of the art and how does the art shift our perception of the individual?

Furthering the inquiry into the symbiotic exchange between possessor and possession, a selection of images by American photographer Louise Lawler depicting art installed in various private and public settings question how the bilateral relationship permeates our interpretation when the collector and the collected are no longer immediately connected. What does de-acquisitioning an object inform us and how does provenance affect our consideration of the art?

The question of legacy became an unexpected facet of 700 Nimes Road (2010-2011), American photographer Catherine Opie’s portrait of legendary actress Elizabeth Taylor. Opie did not directly photograph Taylor for any of the fifty images in the expansive portfolio. Instead, she focused on Taylor’s home and the objects within, inviting viewers to see—then see beyond—the façade of fame and consider how both treasures and trinkets act as vignettes to the stories of a life. Glamorous images of jewels and trophies juxtapose with mundane shots of a printer and the remote-control user manual. Groupings of major artworks on the wall are as illuminating of the home’s mistress as clusters of personal photos. Taylor passed away part way through Opie’s project. The subsequent photos include Taylor’s mementos heading off to auction, raising the question, “Once the collections that help to define someone are disbursed, will our image of that person lose focus?”

In a similar fashion, the twenty-two photographs in Villa Iolas (1982/2017), by American artist and filmmaker William E. Jones, depict the Athens home of iconic art dealer and collector Alexander Iolas. Taken in 1982 by Jones during his first travels abroad, the photographs of art, furniture and antiquities tell a story of privilege that contrast sharply with the images Jones captures on a return visit in 2016. Nearly three decades after Iolas’s 1989 death, his home sits in dilapidation, looted and vandalized. Iolas played an extraordinary role in the evolution of modern art, building the careers of Max Ernst, Yves Klein and Giorgio de Chirico. He gave Andy Warhol his first solo exhibition and was a key advisor to famed collectors John and Dominique de Menil. Yet in the years since his death, his intention of turning his home into a modern art museum as a gift to Greece, along with his reputation, crumbled into ruins. The photographs taken by Jones during his visits in two different eras are incorporated into the film Fall into Ruin (2017), along with shots of contemporary Athens and antiquities on display at the National Archaeological Museum.

“I ask a lot of questions about how portraiture functions – what is there to describe the person or time we live in or a certain set of politics…”
 – Catherine Opie, The Guardian, Feb 9, 2016

We tend to think of the act of collecting as a formal activity yet it can happen casually on a daily basis, often in trivial ways. While we readily acknowledge a collector consciously assembling with deliberate thought, we give lesser consideration to the arbitrary accumulations that each of us accrue. Be it master artworks, incidental baubles or random curios, the objects we acquire and surround ourselves with tell stories of who we are.

Andrew Grassie (Scotland, b. 1966) is a painter known for his small scale, hyper-realist works. He has been the subject of solo exhibitions at the Tate Britain; Talbot Rice Gallery, Edinburgh; institut supérieur des arts de Toulouse; and rennie museum, Vancouver, Canada. He lives and works in London, England.

William E. Jones (USA, b. 1962) is an artist, experimental film-essayist and writer. Jones’s work has been the subject of retrospectives at Tate Modern, London; Anthology Film Archives, New York; Austrian Film Museum, Vienna; and, Oberhausen Short Film Festival. He is a recipient of the John Simon Guggenheim Memorial Fellowship and the Creative Capital/Andy Warhol Foundation Arts Writers Grant. He lives and works in Los Angeles, USA.

Louise Lawler (USA, b. 1947) is a photographer and one of the foremost members of the Pictures Generation. Lawler was the subject of a major retrospective at the Museum of Modern Art, New York in 2017. She has held exhibitions at the Whitney Museum of American Art, New York; Stedelijk Museum, Amsterdam; National Museum of Art, Oslo; and Musée d’Art Moderne de La Ville de Paris. She lives and works in New York.

Catherine Opie (USA, b. 1961) is a photographer and educator. Her work has been exhibited at Wexner Center for the Arts, Ohio; Henie Onstad Art Center, Oslo; the Los Angeles County Museum of Art; Portland Art Museum; and the Guggenheim Museum, New York. She is the recipient of United States Artist Fellowship, Julius Shulman’s Excellence in Photography Award, and the Smithsonian’s Archive of American Art Medal. She lives and works in Los Angeles.

rennie museum opened in October 2009 in historic Wing Sang, the oldest structure in Vancouver’s Chinatown, to feature dynamic exhibitions comprising only art drawn from rennie collection. Showcasing works by emerging and established international artists, the exhibits, accompanied by supporting catalogues, are open free to the public through engaging guided tours. The museum’s commitment to providing access to arts and culture is also expressed through its education program, which offers free age-appropriate tours and customized workshops to children of all ages.

rennie collection is a globally recognized collection of contemporary art that focuses on works that tackle issues related to identity, social commentary and injustice, appropriation, and the nature of painting, photography, sculpture and film. Currently the collection includes works by over 370 emerging and established artists, with over fifty collected in depth. The Vancouver based collection engages actively with numerous museums globally through a robust, artist-centric, lending policy.

So despite the Wikipedia definition, it seems that portraits don’t always feature people. While Bogost didn’t jump into that particular rabbit hole, he did touch on the business side of art.

What about intellectual property?

Bogost doesn’t explicitly discuss this particular issue. It’s a big topic, so I’m touching on it only lightly. If an artist works with an AI, the question of who owns the artwork could prove thorny. Is the copyright owner the computer scientist or the artist or both? Or does the AI artist-agent itself own the copyright? That last question may not be all that farfetched. Sophia, a social humanoid robot, has occasioned thought about ‘personhood.’ (Note: The robots mentioned in this posting have artificial intelligence.) From the Sophia (robot) Wikipedia entry (Note: Links have been removed),

Sophia has been interviewed in the same manner as a human, striking up conversations with hosts. Some replies have been nonsensical, while others have impressed interviewers such as 60 Minutes’ Charlie Rose.[12] In a piece for CNBC, when the interviewer expressed concerns about robot behavior, Sophia joked that he had “been reading too much Elon Musk. And watching too many Hollywood movies”.[27] Musk tweeted that Sophia should watch The Godfather and asked “what’s the worst that could happen?”[28][29] Business Insider’s chief UK editor Jim Edwards interviewed Sophia, and while the answers were “not altogether terrible”, he predicted it was a step towards “conversational artificial intelligence”.[30] At the 2018 Consumer Electronics Show, a BBC News reporter described talking with Sophia as “a slightly awkward experience”.[31]

On October 11, 2017, Sophia was introduced to the United Nations with a brief conversation with the United Nations Deputy Secretary-General, Amina J. Mohammed.[32] On October 25, at the Future Investment Summit in Riyadh, the robot was granted Saudi Arabian citizenship [emphasis mine], becoming the first robot ever to have a nationality.[29][33] This attracted controversy as some commentators wondered if this implied that Sophia could vote or marry, or whether a deliberate system shutdown could be considered murder. Social media users used Sophia’s citizenship to criticize Saudi Arabia’s human rights record. In December 2017, Sophia’s creator David Hanson said in an interview that Sophia would use her citizenship to advocate for women’s rights in her new country of citizenship; Newsweek criticized that “What [Hanson] means, exactly, is unclear”.[34] On November 27, 2018, Sophia was given a visa by Azerbaijan while attending Global Influencer Day Congress held in Baku. On December 15, 2018, Sophia was appointed a Belt and Road Innovative Technology Ambassador by China.[35]

As for an AI artist-agent’s intellectual property rights, I have a July 10, 2017 posting featuring that question in more detail. Whether you read that piece or not, it seems obvious that artists might hesitate to call an AI agent a partner rather than a medium of expression. After all, a partner (and/or the computer scientist who developed the programme) might expect to share in property rights and profits, but paint, marble, plastic, and other media used by artists don’t have those expectations.

Moving slightly off topic, in my July 10, 2017 posting I mentioned a competition (literary and performing arts rather than visual arts) called ‘Dartmouth College and its Neukom Institute Prizes in Computational Arts’. It was started in 2016 and, as of 2018, was still operational under this name: Creative Turing Tests. Assuming there’ll be contests for prizes in 2019, there are (from the contest site) [1] PoetiX, a competition in computer-generated sonnet writing; [2] Musical Style, composition algorithms in various styles, and human-machine improvisation …; and [3] DigiLit, algorithms able to produce “human-level” short story writing that is indistinguishable from an “average” human effort. You can find the contest site here.

An artificial synapse tuned by light, a ferromagnetic memristor, and a transparent, flexible artificial synapse

Down the memristor rabbit hole one more time.* I started out with news about two new papers and inadvertently found two more. In a bid to keep this posting to a manageable size, I’m stopping at four.

UK

In a June 19, 2019 Nanowerk Spotlight article, Dr. Neil Kemp discusses memristors and some of his latest work (Note: A link has been removed),

Memristor (or memory resistors) devices are non-volatile electronic memory devices that were first theorized by Leon Chua in the 1970’s. However, it was some thirty years later that the first practical device was fabricated. This was in 2008 when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behaviour.

The high interest in memristor devices also stems from the fact that these devices emulate the memory and learning properties of biological synapses, i.e., the electrical resistance value of the device is dependent on the history of the current flowing through it.

There is a huge effort underway to use memristor devices in neuromorphic computing applications and it is now reasonable to imagine the development of a new generation of artificial intelligent devices with very low power consumption (non-volatile), ultra-fast performance and high-density integration.

These discoveries come at an important juncture in microelectronics, since there is increasing disparity between computational needs of Big Data, Artificial Intelligence (A.I.) and the Internet of Things (IoT), and the capabilities of existing computers. The increases in speed, efficiency and performance of computer technology cannot continue in the same manner as it has done since the 1960s.

To date, most memristor research has focussed on the electronic switching properties of the device. However, for many applications it is useful to have an additional handle (or degree of freedom) on the device to control its resistive state. For example memory and processing in the brain also involves numerous chemical and bio-chemical reactions that control the brain structure and its evolution through development.

To emulate this in a simple solid-state system composed of just switches alone is not possible. In our research, we are interested in using light to mediate this essential control.

We have demonstrated that light can be used to make short and long-term memory and we have shown how light can modulate a special type of learning, called spike timing dependent plasticity (STDP). STDP involves two neuronal spikes incident across a synapse at the same time. Depending on the relative timing of the spikes and their overlap across the synaptic cleft, the connection strength is either strengthened or weakened.

In our earlier work, we were only able to achieve small switching effects in memristors using light. In our latest work (Advanced Electronic Materials, “Percolation Threshold Enables Optical Resistive-Memory Switching and Light-Tuneable Synaptic Learning in Segregated Nanocomposites”), we take advantage of a percolating-like nanoparticle morphology to vastly increase the magnitude of the switching between electronic resistance states when light is incident on the device.

We have used an inhomogeneous percolating network consisting of metallic nanoparticles distributed in filamentary-like conduction paths. Electronic conduction and the resistance of the device is very sensitive to any disruption of the conduction path(s).

By embedding the nanoparticles in a polymer that can expand or contract with light, the conduction pathways are broken or re-connected, causing very large changes in the electrical resistance and memristance of the device.

Our devices could lead to the development of new memristor-based artificial intelligence systems that are adaptive and reconfigurable using a combination of optical and electronic signalling. Furthermore, they have the potential for the development of very fast optical cameras for artificial intelligence recognition systems.

Our work provides a nice proof-of-concept but the materials used means the optical switching is slow. The materials are also not well suited to industry fabrication. In our on-going work we are addressing these switching speed issues whilst also focussing on industry compatible materials.

Currently we are working on a new type of optical memristor device that should give us orders of magnitude improvement in the optical switching speeds whilst also retaining a large difference between the resistance on and off states. We hope to be able to achieve nanosecond switching speeds. The materials used are also compatible with industry standard methods of fabrication.

The new devices should also have applications in optical communications, interfacing and photonic computing. We are currently looking for commercial investors to help fund the research on these devices so that we can bring the device specifications to a level of commercial interest.

If you’re interested in memristors, Kemp’s article is well written and quite informative for nonexperts, assuming of course you can tolerate not understanding everything perfectly.
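
For readers who think best in code, here’s a minimal sketch of the two ideas in the excerpt above: a device whose conductance depends on the history of the pulses applied to it, and an STDP-style rule in which the relative timing of two spikes decides whether a connection is strengthened or weakened. This is my own toy illustration, not Kemp’s model; the class name, constants, and time scales are all invented for the example.

```python
# A toy, history-dependent 'memristor' synapse plus a caricature of
# spike-timing-dependent plasticity (STDP). Illustration only.

import math

class ToyMemristorSynapse:
    def __init__(self, conductance=0.5):
        self.conductance = conductance  # arbitrary units, clipped to 0..1

    def _clip(self):
        self.conductance = min(1.0, max(0.0, self.conductance))

    def apply_pulse(self, amplitude):
        # Each pulse nudges the conductance and the change persists,
        # i.e. the device 'remembers' the history of stimulation.
        self.conductance += 0.01 * amplitude
        self._clip()

    def stdp_update(self, dt_ms, a_plus=0.05, a_minus=0.05, tau_ms=20.0):
        # dt_ms = t_post - t_pre. Pre-before-post (dt > 0) strengthens the
        # weight; post-before-pre (dt < 0) weakens it, with an exponential
        # fall-off as the two spikes move further apart in time.
        if dt_ms > 0:
            dw = a_plus * math.exp(-dt_ms / tau_ms)
        else:
            dw = -a_minus * math.exp(dt_ms / tau_ms)
        self.conductance += dw
        self._clip()
        return dw

if __name__ == "__main__":
    syn = ToyMemristorSynapse()
    for amp in (1.0, 1.0, -0.5):
        syn.apply_pulse(amp)
    print(f"conductance after three pulses: {syn.conductance:.3f}")
    for dt in (2.0, 10.0, -10.0, -2.0):
        dw = syn.stdp_update(dt)
        print(f"dt = {dt:+5.1f} ms -> weight change {dw:+.4f}")
```

In the devices Kemp describes, light provides an extra handle on exactly this kind of behaviour, which is what makes the learning ‘light-tuneable’.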

Here are links to and citations for two papers. The first, a May 2019 paper, is the latest work referred to in the article; the second appeared in July 2019.

Percolation Threshold Enables Optical Resistive‐Memory Switching and Light‐Tuneable Synaptic Learning in Segregated Nanocomposites by Ayoub H. Jaafar, Mary O’Neill, Stephen M. Kelly, Emanuele Verrelli, Neil T. Kemp. Advanced Electronic Materials DOI: https://doi.org/10.1002/aelm.201900197 First published: 28 May 2019

Wavelength dependent light tunable resistive switching graphene oxide nonvolatile memory devices by Ayoub H. Jaafar, N. T. Kemp. Carbon DOI: https://doi.org/10.1016/j.carbon.2019.07.007 Available online 3 July 2019

The first paper (May 2019) is definitely behind a paywall; the second paper (July 2019) appears to be behind a paywall as well.

Dr. Kemp’s work has been featured here previously in a January 3, 2018 posting in the subsection titled, Shining a light on the memristor.

China

This work from China was announced in a June 20, 2019 news item on Nanowerk,

Memristors, demonstrated by solid-state devices with continuously tunable resistance, have emerged as a new paradigm for self-adaptive networks that require synapse-like functions. Spin-based memristors offer advantages over other types of memristors because of their significant endurance and high energy efficiency.

However, it remains a challenge to build dense and functional spintronic memristors with structures and materials that are compatible with existing ferromagnetic devices. Ta/CoFeB/MgO heterostructures are commonly used in interfacial PMA-based [perpendicular magnetic anisotropy] magnetic tunnel junctions, which exhibit large tunnel magnetoresistance and are implemented in commercial MRAM [magnetic random access memory] products.

“To achieve the memristive function, DW is driven back and forth in a continuous manner in the CoFeB layer by applying in-plane positive or negative current pulses along the Ta layer, utilizing SOT that the current exerts on the CoFeB magnetization,” said Shuai Zhang, a coauthor in the paper. “Slowly propagating domain wall generates a creep in the detection area of the device, which yields a broad range of intermediate resistive states in the AHE [anomalous Hall effect] measurements. Consequently, AHE resistance is modulated in an analog manner, being controlled by the pulsed current characteristics including amplitude, duration, and repetition number.”

“For a follow-up study, we are working on more neuromorphic operations, such as spike-timing-dependent plasticity and paired pulsed facilitation,” concludes You. …
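
To get an intuition for the ‘analog manner’ Zhang describes, in which the resistance state is set by the amplitude, duration, and repetition number of the current pulses, here’s a toy caricature in code. It is my own illustration, not the authors’ model; the function, constants, and the normalized ‘domain-wall position’ are stand-ins invented for the example.

```python
# Toy caricature (not the authors' model) of an analog, pulse-programmed
# resistance: each current pulse moves a 'domain-wall position' a little,
# and the readout resistance varies continuously with that position.

def program_resistance(pulses, r_low=1.0, r_high=2.0, sensitivity=0.02):
    """pulses: list of (amplitude, duration, count) tuples.
    Positive amplitude moves the wall one way, negative the other."""
    position = 0.5  # normalized domain-wall position, 0..1
    for amplitude, duration, count in pulses:
        for _ in range(count):
            position += sensitivity * amplitude * duration
            position = min(1.0, max(0.0, position))
    # Readout resistance interpolates between the two extreme states.
    return r_low + (r_high - r_low) * position

if __name__ == "__main__":
    # Ten positive pulses nudge the resistance up; five negative pulses
    # partially walk it back, so many intermediate states are reachable.
    print(program_resistance([(+1.0, 1.0, 10)]))
    print(program_resistance([(+1.0, 1.0, 10), (-1.0, 1.0, 5)]))
```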

Here are links to and citations for the paper (Note: It’s a little confusing but I believe one of the links will take you to the online version; as for the ‘open access’ link, keep reading),

A Spin–Orbit‐Torque Memristive Device by Shuai Zhang, Shijiang Luo, Nuo Xu, Qiming Zou, Min Song, Jijun Yun, Qiang Luo, Zhe Guo, Ruofan Li, Weicheng Tian, Xin Li, Hengan Zhou, Huiming Chen, Yue Zhang, Xiaofei Yang, Wanjun Jiang, Ka Shen, Jeongmin Hong, Zhe Yuan, Li Xi, Ke Xia, Sayeef Salahuddin, Bernard Dieny, Long You. Advanced Electronic Materials Volume 5, Issue 4 April 2019 (print version) 1800782 DOI: https://doi.org/10.1002/aelm.201800782 First published [online]: 30 January 2019 Note: there is another DOI, https://doi.org/10.1002/aelm.201970022 where you can have open access to Memristors: A Spin–Orbit‐Torque Memristive Device (Adv. Electron. Mater. 4/2019)

The paper published online in January 2019 is behind a paywall and the paper (almost the same title) published in April 2019 has a new DOI and is open access. Final note: I tried accessing the ‘free’ paper and opened up a free file of the artwork featuring the work from China on the back cover of the April 2019 issue of Advanced Electronic Materials.

Korea

Usually when I see the words transparency and flexibility, I expect graphene to be one of the materials. That’s not the case for this paper (a link to and citation for it follow),

Transparent and flexible photonic artificial synapse with piezo-phototronic modulator: Versatile memory capability and higher order learning algorithm by Mohit Kumar, Joondong Kim, Ching-Ping Wong. Nano Energy Volume 63, September 2019, 103843 DOI: https://doi.org/10.1016/j.nanoen.2019.06.039 Available online 22 June 2019

Here’s the abstract for the paper, where you’ll see that the material is made up of zinc oxide and silver nanowires,

An artificial photonic synapse having tunable manifold synaptic response can be an essential step forward for the advancement of novel neuromorphic computing. In this work, we reported the development of highly transparent and flexible two-terminal ZnO/Ag-nanowires/PET photonic artificial synapse [emphasis mine]. The device shows purely photo-triggered all essential synaptic functions such as transition from short- to long-term plasticity, paired-pulse facilitation, and spike-timing-dependent plasticity, including in the versatile memory capability. Importantly, strain-induced piezo-phototronic effect within ZnO provides an additional degree of regulation to modulate all of the synaptic functions in multi-levels. The observed effect is quantitatively explained as a dynamic of photo-induced electron-hole trapping/detrapping via the defect states such as oxygen vacancies. We revealed that the synaptic functions can be consolidated and converted by applied strain, which is not previously applied any of the reported synaptic devices. This study will open a new avenue to the scientific community to control and design highly transparent wearable neuromorphic computing.

This paper is behind a paywall.
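
One of the synaptic behaviours listed in the abstract, paired-pulse facilitation, is simple enough to caricature in a few lines: when a second stimulus arrives soon after the first, the response to the second one is larger, and the boost fades as the interval between pulses grows. The sketch below is mine, not the authors’; the constants and the exponential decay are invented purely for illustration.

```python
# Toy sketch (not from the paper) of paired-pulse facilitation: the
# response to a second pulse is larger when the inter-pulse interval is
# short, because 'facilitation' from the first pulse has not yet decayed.

import math

def paired_pulse_responses(interval_ms, base_response=1.0,
                           facilitation=0.6, tau_ms=50.0):
    first = base_response
    # Facilitation left over from the first pulse decays exponentially.
    leftover = facilitation * math.exp(-interval_ms / tau_ms)
    second = base_response * (1.0 + leftover)
    return first, second

if __name__ == "__main__":
    for interval in (10, 50, 200):
        r1, r2 = paired_pulse_responses(interval)
        print(f"interval {interval:>3} ms: PPF ratio = {r2 / r1:.2f}")
```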

Gene editing and personalized medicine: Canada

Back in the fall of 2018 I came across one of those overexcited pieces about personalized medicine and gene editing that are out there. This one came from an unexpected source, an author who is a “PhD Scientist in Medical Science (Blood and Vasculature)” (from Rick Gierczak’s LinkedIn profile).

It starts out promisingly enough, although I’m beginning to dread the use of the word ‘precise’ where medicine is concerned (from a September 17, 2018 posting on the Science Borealis blog by Rick Gierczak; Note: Links have been removed),

CRISPR-Cas9 technology was accidentally discovered in the 1980s when scientists were researching how bacteria defend themselves against viral infection. While studying bacterial DNA called clustered regularly interspaced short palindromic repeats (CRISPR), they identified additional CRISPR-associated (Cas) protein molecules. Together, CRISPR and one of those protein molecules, termed Cas9, can locate and cut precise regions of bacterial DNA. By 2012, researchers understood that the technology could be modified and used more generally to edit the DNA of any plant or animal. In 2015, the American Association for the Advancement of Science chose CRISPR-Cas9 as science’s “Breakthrough of the Year”.

Today, CRISPR-Cas9 is a powerful and precise gene-editing tool [emphasis mine] made of two molecules: a protein that cuts DNA (Cas9) and a custom-made length of RNA that works like a GPS for locating the exact spot that needs to be edited (CRISPR). Once inside the target cell nucleus, these two molecules begin editing the DNA. After the desired changes are made, they use a repair mechanism to stitch the new DNA into place. Cas9 never changes, but the CRISPR molecule must be tailored for each new target — a relatively easy process in the lab. However, it’s not perfect, and occasionally the wrong DNA is altered [emphasis mine].

Note that Gierczak makes a point of mentioning that CRISPR/Cas9 is “not perfect.” And then, he gets excited (Note: Links have been removed),

CRISPR-Cas9 has the potential to treat serious human diseases, many of which are caused by a single “letter” mutation in the genetic code (A, C, T, or G) that could be corrected by precise editing. [emphasis mine] Some companies are taking notice of the technology. A case in point is CRISPR Therapeutics, which recently developed a treatment for sickle cell disease, a blood disorder that causes a decrease in oxygen transport in the body. The therapy targets a special gene called fetal hemoglobin that’s switched off a few months after birth. Treatment involves removing stem cells from the patient’s bone marrow and editing the gene to turn it back on using CRISPR-Cas9. These new stem cells are returned to the patient ready to produce normal red blood cells. In this case, the risk of error is eliminated because the new cells are screened for the correct edit before use.

The breakthroughs shown by companies like CRISPR Therapeutics are evidence that personalized medicine has arrived. [emphasis mine] However, these discoveries will require government regulatory approval from the countries where the treatment is going to be used. In the US, the Food and Drug Administration (FDA) has developed new regulations allowing somatic (i.e., non-germ) cell editing and clinical trials to proceed. [emphasis mine]

The potential treatment for sickle cell disease is exciting, but Gierczak offers no evidence that this treatment or any unnamed others constitute proof that “personalized medicine has arrived.” In fact, Goldman Sachs, a US-based investment bank, makes the case that it never will.

Cost/benefit analysis

Edward Abrahams, president of the Personalized Medicine Coalition (US-based), advocates for personalized medicine while noting, in passing, market forces as represented by Goldman Sachs, in his May 23, 2018 piece for statnews.com (Note: A link has been removed),

One of every four new drugs approved by the Food and Drug Administration over the last four years was designed to become a personalized (or “targeted”) therapy that zeros in on the subset of patients likely to respond positively to it. That’s a sea change from the way drugs were developed and marketed 10 years ago.

Some of these new treatments have extraordinarily high list prices. But focusing solely on the cost of these therapies rather than on the value they provide threatens the future of personalized medicine.

… most policymakers are not asking the right questions about the benefits of these treatments for patients and society. Influenced by cost concerns, they assume that prices for personalized tests and treatments cannot be justified even if they make the health system more efficient and effective by delivering superior, longer-lasting clinical outcomes and increasing the percentage of patients who benefit from prescribed treatments.

Goldman Sachs, for example, issued a report titled “The Genome Revolution.” It argues that while “genome medicine” offers “tremendous value for patients and society,” curing patients may not be “a sustainable business model.” [emphasis mine] The analysis underlines that the health system is not set up to reap the benefits of new scientific discoveries and technologies. Just as we are on the precipice of an era in which gene therapies, gene-editing, and immunotherapies promise to address the root causes of disease, Goldman Sachs says that these therapies have a “very different outlook with regard to recurring revenue versus chronic therapies.”

Let’s just chew on (contemplate) this one for a minute: “curing patients may not be ‘a sustainable business model’!”

Coming down to earth: policy

While I find Gierczak to be over-enthused, he, like Abrahams, emphasizes the importance of new policy; in his case, the focus is Canadian policy. From Gierczak’s September 17, 2018 posting (Note: Links have been removed),

In Canada, companies need approval from Health Canada. But a 2004 law called the Assisted Human Reproduction Act (AHR Act) states that it’s a criminal offence “to alter the genome of a human cell, or in vitro embryo, that is capable of being transmitted to descendants”. The Act is so broadly written that Canadian scientists are prohibited from using the CRISPR-Cas9 technology on even somatic cells. Today, Canada is one of the few countries in the world where treating a disease with CRISPR-Cas9 is a crime.

On the other hand, some countries provide little regulatory oversight for editing either germ or somatic cells. In China, a company often only needs to satisfy the requirements of the local hospital where the treatment is being performed. And, if germ-cell editing goes wrong, there is little recourse for the future generations affected.

The AHR Act was introduced to regulate the use of reproductive technologies like in vitro fertilization and research related to cloning human embryos during the 1980s and 1990s. Today, we live in a time when medical science, and its role in Canadian society, is rapidly changing. CRISPR-Cas9 is a powerful tool, and there are aspects of the technology that aren’t well understood and could potentially put patients at risk if we move ahead too quickly. But the potential benefits are significant. Updated legislation that acknowledges both the risks and current realities of genomic engineering [emphasis mine] would relieve the current obstacles and support a path toward the introduction of safe new therapies.

Criminal ban on human gene-editing of inheritable cells (in Canada)

I had no idea there was a criminal ban on the practice until reading this January 2017 editorial by Bartha Maria Knoppers, Rosario Isasi, Timothy Caulfield, Erika Kleiderman, Patrick Bedford, Judy Illes, Ubaka Ogbogu, Vardit Ravitsky, & Michael Rudnicki for (Nature) npj Regenerative Medicine (Note: Links have been removed),

Driven by the rapid evolution of gene editing technologies, international policy is examining which regulatory models can address the ensuing scientific, socio-ethical and legal challenges for regenerative and personalised medicine.1 Emerging gene editing technologies, including the CRISPR/Cas9 2015 scientific breakthrough,2 are powerful, relatively inexpensive, accurate, and broadly accessible research tools.3 Moreover, they are being utilised throughout the world in a wide range of research initiatives with a clear eye on potential clinical applications. Considering the implications of human gene editing for selection, modification and enhancement, it is time to re-examine policy in Canada relevant to these important advances in the history of medicine and science, and the legislative and regulatory frameworks that govern them. Given the potential human reproductive applications of these technologies, careful consideration of these possibilities, as well as ethical and regulatory scrutiny must be a priority.4

With the advent of human embryonic stem cell research in 1978, the birth of Dolly (the cloned sheep) in 1996 and the Raelian cloning hoax in 2003, the environment surrounding the enactment of Canada’s 2004 Assisted Human Reproduction Act (AHRA) was the result of a decade of polarised debate,5 fuelled by dystopian and utopian visions for future applications. Rightly or not, this led to the AHRA prohibition on a wide range of activities, including the creation of embryos (s. 5(1)(b)) or chimeras (s. 5(1)(i)) for research and in vitro and in vivo germ line alterations (s. 5(1)(f)). Sanctions range from a fine (up to $500,000) to imprisonment (up to 10 years) (s. 60 AHRA).

In Canada, the criminal ban on gene editing appears clear, the Act states that “No person shall knowingly […] alter the genome of a cell of a human being or in vitro embryo such that the alteration is capable of being transmitted to descendants;” [emphases mine] (s. 5(1)(f) AHRA). This approach is not shared worldwide as other countries such as the United Kingdom, take a more regulatory approach to gene editing research.1 Indeed, as noted by the Law Reform Commission of Canada in 1982, criminal law should be ‘an instrument of last resort’ used solely for “conduct which is culpable, seriously harmful, and generally conceived of as deserving of punishment”.6 A criminal ban is a suboptimal policy tool for science as it is inflexible, stifles public debate, and hinders responsiveness to the evolving nature of science and societal attitudes.7 In contrast, a moratorium such as the self-imposed research moratorium on human germ line editing called for by scientists in December 20158 can at least allow for a time limited pause. But like bans, they may offer the illusion of finality and safety while halting research required to move forward and validate innovation.

On October 1st, 2016, Health Canada issued a Notice of Intent to develop regulations under the AHRA but this effort is limited to safety and payment issues (i.e. gamete donation). Today, there is a need for Canada to revisit the laws and policies that address the ethical, legal and social implications of human gene editing. The goal of such a critical move in Canada’s scientific and legal history would be a discussion of the right of Canadians to benefit from the advancement of science and its applications as promulgated in article 27 of the Universal Declaration of Human Rights9 and article 15(b) of the International Covenant on Economic, Social and Cultural Rights,10 which Canada has signed and ratified. Such an approach would further ensure the freedom of scientific endeavour both as a principle of a liberal democracy and as a social good, while allowing Canada to be engaged with the international scientific community.

Even though it’s a bit old, I still recommend reading the open access editorial in full, if you have the time.

One last thing about the paper: the acknowledgements,

Sponsored by Canada’s Stem Cell Network, the Centre of Genomics and Policy of McGill University convened a ‘think tank’ on the future of human gene editing in Canada with legal and ethics experts as well as representatives and observers from government in Ottawa (August 31, 2016). The experts were Patrick Bedford, Janetta Bijl, Timothy Caulfield, Judy Illes, Rosario Isasi, Jonathan Kimmelman, Erika Kleiderman, Bartha Maria Knoppers, Eric Meslin, Cate Murray, Ubaka Ogbogu, Vardit Ravitsky, Michael Rudnicki, Stephen Strauss, Philip Welford, and Susan Zimmerman. The observers were Geneviève Dubois-Flynn, Danika Goosney, Peter Monette, Kyle Norrie, and Anthony Ridgway.

Competing interests

The authors declare no competing interests.

Both McGill and the Stem Cell Network pop up again. A November 8, 2017 article about the need for new Canadian gene-editing policies by Tom Blackwell for the National Post features some familiar names (Did someone have a budget for public relations and promotion?),

It’s one of the most exciting, and controversial, areas of health science today: new technology that can alter the genetic content of cells, potentially preventing inherited disease — or creating genetically enhanced humans.

But Canada is among the few countries in the world where working with the CRISPR gene-editing system on cells whose DNA can be passed down to future generations is a criminal offence, with penalties of up to 10 years in jail.

This week, one major science group announced it wants that changed, calling on the federal government to lift the prohibition and allow researchers to alter the genome of inheritable “germ” cells and embryos.

The potential of the technology is huge and the theoretical risks like eugenics or cloning are overplayed, argued a panel of the Stem Cell Network.

The step would be a “game-changer,” said Bartha Knoppers, a health-policy expert at McGill University, in a presentation to the annual Till & McCulloch Meetings of stem-cell and regenerative-medicine researchers [These meetings were originally known as the Stem Cell Network’s Annual General Meeting {AGM}]. [emphases mine]

“I’m completely against any modification of the human genome,” said the unidentified meeting attendee. “If you open this door, you won’t ever be able to close it again.”

If the ban is kept in place, however, Canadian scientists will fall further behind colleagues in other countries, say the experts behind the statement; they argue possible abuses can be prevented with good ethical oversight.

“It’s a human-reproduction law, it was never meant to ban and slow down and restrict research,” said Vardit Ravitsky, a University of Montreal bioethicist who was part of the panel. “It’s a sort of historical accident … and now our hands are tied.”

There are fears, as well, that CRISPR could be used to create improved humans who are genetically programmed to have certain facial or other features, or that the editing could have harmful side effects. Regardless, none of it is happening in Canada, good or bad.

In fact, the Stem Cell Network panel is arguably skirting around the most contentious applications of the technology. It says it is asking the government merely to legalize research for its own sake on embryos and germ cells — those in eggs and sperm — not genetic editing of embryos used to actually get women pregnant.

The highlighted portions in the last two paragraphs of the excerpt were written one year prior to the claims by a Chinese scientist that he had run a clinical trial resulting in gene-edited twins, Lulu and Nana. (See my November 28, 2018 posting for a comprehensive overview of the original furor.) I have yet to publish a followup posting featuring the news that the CRISPR twins may have been ‘improved’ more extensively than originally realized. The initial reports about the twins focused on an illness-related reason (making them HIV ‘immune’) but made no mention of enhanced cognitive skills, a possible side effect of eliminating the gene that would make them HIV ‘immune’. To date, the researcher has not made the bulk of his data available for an in-depth analysis to support his claim that he successfully gene-edited the twins. As well, there were apparently seven other pregnancies coming to term as part of the researcher’s clinical trial and there has been no news about those births.

Risk analysis innovation

Before moving on to the innovation of risk analysis, I want to focus a little more on at least one of the risks that gene-editing might present. Gierczak noted that CRISPR/Cas9 is “not perfect,” which acknowledges the truth but doesn’t convey all that much information.

While the terms ‘precision’ and ‘scissors’ are used frequently when describing the CRISPR technique, scientists actually mean that the technique is significantly ‘more precise’ than other techniques, but they are not referencing an engineering level of precision. As for the ‘scissors’, it’s an analogy scientists like to use, but in fact CRISPR is not as efficient and precise as a pair of scissors.
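
To give a rough, computational sense of why ‘precise’ is relative here: Cas9 is steered to its target by a guide sequence roughly 20 DNA letters long, and sites in the genome that differ from the guide by only a letter or two can sometimes be cut as well (the so-called off-target edits). The sketch below is purely my own illustration, with a made-up guide and a made-up stretch of DNA; real off-target prediction tools are far more sophisticated.

```python
# Toy illustration only (not a real bioinformatics tool): scan a made-up
# stretch of DNA for sites that match a 20-letter guide sequence exactly
# or with up to two mismatches, i.e. potential 'off-target' sites.

import random

def find_near_matches(dna, guide, max_mismatches=2):
    hits = []
    for i in range(len(dna) - len(guide) + 1):
        window = dna[i:i + len(guide)]
        mismatches = sum(1 for a, b in zip(window, guide) if a != b)
        if mismatches <= max_mismatches:
            hits.append((i, mismatches))
    return hits

if __name__ == "__main__":
    random.seed(1)
    guide = "ACGTTGCAGATCCATGACTG"  # made-up 20-letter guide sequence
    background = "".join(random.choice("ACGT") for _ in range(5000))
    # Plant the exact target once and a two-mismatch near-miss once.
    near_miss = guide[:5] + "A" + guide[6:15] + "C" + guide[16:]
    dna = (background[:1000] + guide + background[1000:3000]
           + near_miss + background[3000:])
    for position, mismatches in find_near_matches(dna, guide):
        print(f"candidate site at {position}: {mismatches} mismatch(es)")
```

Off-target matches of that sort are one kind of imprecision; the excerpt that follows describes another: large, unintended deletions and rearrangements.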

Michael Le Page in a July 16, 2018 article for New Scientist lays out some of the issues (Note: A link has been removed),

A study of CRISPR suggests we shouldn’t rush into trying out CRISPR genome editing inside people’s bodies just yet. The technique can cause big deletions or rearrangements of DNA [emphasis mine], says Allan Bradley of the Wellcome Sanger Institute in the UK, meaning some therapies based on CRISPR may not be quite as safe as we thought.

The CRISPR genome editing technique is revolutionising biology, enabling us to create new varieties of plants and animals and develop treatments for a wide range of diseases.

The CRISPR Cas9 protein works by cutting the DNA of a cell in a specific place. When the cell repairs the damage, a few DNA letters get changed at this spot – an effect that can be exploited to disable genes.

At least, that’s how it is supposed to work. But in studies of mice and human cells, Bradley’s team has found that in around a fifth of cells, CRISPR causes deletions or rearrangements more than 100 DNA letters long. These surprising changes are sometimes thousands of letters long.

“I do believe the findings are robust,” says Gaetan Burgio of the Australian National University, an expert on CRISPR who has debunked previous studies questioning the method’s safety. “This is a well-performed study and fairly significant.”

I covered the Bradley paper and the concerns in a July 17, 2018 posting, ‘The CRISPR (clustered regularly interspaced short palindromic repeats)-CAS9 gene-editing technique may cause new genetic damage kerfuffle’. (The ‘kerfuffle’ was in reference to a report that the CRISPR market was affected by the publication of Bradley’s paper.)

Despite Health Canada not moving swiftly enough for some researchers, it has nonetheless managed to release an ‘outcome’ report about a consultation/analysis started in October 2016. Before getting to the consultation’s outcome, it’s interesting to look at how the consultation’s call for responses was described (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

In October 2016, recognizing the need to strengthen the regulatory framework governing assisted human reproduction in Canada, Health Canada announced its intention to bring into force the dormant sections of the Assisted Human Reproduction Act  and to develop the necessary supporting regulations.

This consultation document provides an overview of the key policy proposals that will help inform the development of regulations to support bringing into force Section 10, Section 12 and Sections 45-58 of the Act. Specifically, the policy proposals describe the Department’s position on the following:

Section 10: Safety of Donor Sperm and Ova

  • Scope and application
  • Regulated parties and their regulatory obligations
  • Processing requirements, including donor suitability assessment
  • Record-keeping and traceability

Section 12: Reimbursement

  • Expenditures that may be reimbursed
  • Process for reimbursement
  • Creation and maintenance of records

Sections 45-58: Administration and Enforcement

  • Scope of the administration and enforcement framework
  • Role of inspectors designated under the Act

The purpose of the document is to provide Canadians with an opportunity to review the policy proposals and to provide feedback [emphasis mine] prior to the Department finalizing policy decisions and developing the regulations. In addition to requesting stakeholders’ general feedback on the policy proposals, the Department is also seeking input on specific questions, which are included throughout the document.

It took me a while to find the relevant section (in particular, take note of ‘Federal Regulatory Oversight’),

3.2. AHR in Canada Today

Today, an increasing number of Canadians are turning to AHR technologies to grow or build their families. A 2012 Canadian study [Footnote 1] found that infertility is on the rise in Canada, with roughly 16% of heterosexual couples experiencing infertility. In addition to rising infertility, the trend of delaying marriage and parenthood, scientific advances in cryopreserving ova, and the increasing use of AHR by LGBTQ2 couples and single parents to build a family are all contributing to an increase in the use of AHR technologies.

The growing use of reproductive technologies by Canadians to help build their families underscores the need to strengthen the AHR Act. While the approach to regulating AHR varies from country to country, Health Canada has considered international best practices and the need for regulatory alignment when developing the proposed policies set out in this document. …

3.2.1 Federal Regulatory Oversight

Although the scope of the AHR Act was significantly reduced in 2012 and some of the remaining sections have not yet been brought into force, there are many important sections of the Act that are currently administered and enforced by Health Canada, as summarized generally below:

Section 5: Prohibited Scientific and Research Procedures
Section 5 prohibits certain types of scientific research and clinical procedures that are deemed unacceptable, including: human cloning, the creation of an embryo for non-reproductive purposes, maintaining an embryo outside the human body beyond the fourteenth day, sex selection for non-medical reasons, altering the genome in a way that could be transmitted to descendants, and creating a chimera or a hybrid. [emphasis mine]

….

It almost seems as if they were hiding the section that broached the human gene-editing question. It doesn’t seem to have worked as, it appears, there are some very motivated parties determined to reframe the discussion. Health Canada’s ‘outcome’ report, published in March 2019, What we heard: A summary of scanning and consultations on what’s next for health product regulation, reflects the success of those efforts,

1.0 Introduction and Context

Scientific and technological advances are accelerating the pace of innovation. These advances are increasingly leading to the development of health products that are better able to predict, define, treat, and even cure human diseases. Globally, many factors are driving regulators to think about how to enable health innovation. To this end, Health Canada has been expanding beyond existing partnerships and engaging both domestically and internationally. This expanding landscape of products and services comes with a range of new challenges and opportunities.

In keeping up to date with emerging technologies and working collaboratively through strategic partnerships, Health Canada seeks to position itself as a regulator at the forefront of health innovation. Following the targeted sectoral review of the Health and Biosciences Sector Regulatory Review consultation by the Treasury Board Secretariat, Health Canada held a number of targeted meetings with a broad range of stakeholders.

This report outlines the methodologies used to look ahead at the emerging health technology environment, [emphasis mine] the potential areas of focus that resulted, and the key findings from consultations.

… the Department identified the following key drivers that are expected to shape the future of health innovation:

  1. The use of “big data” to inform decision-making: Health systems are generating more data, and becoming reliant on this data. The increasing accuracy, types, and volume of data available in real time enable automation and machine learning that can forecast activity, behaviour, or trends to support decision-making.
  2. Greater demand for citizen agency: Canadians increasingly want and have access to more information, resources, options, and platforms to manage their own health (e.g., mobile apps, direct-to-consumer services, decentralization of care).
  3. Increased precision and personalization in health care delivery: Diagnostic tools and therapies are increasingly able to target individual patients with customized therapies (e.g., individual gene therapy).
  4. Increased product complexity: Increasingly complex products do not fit well within conventional product classifications and standards (e.g., 3D printing).
  5. Evolving methods for production and distribution: In some cases, manufacturers and supply chains are becoming more distributed, challenging the current framework governing production and distribution of health products.
  6. The ways in which evidence is collected and used are changing: The processes around new drug innovation, research and development, and designing clinical trials are evolving in ways that are more flexible and adaptive.

With these key drivers in mind, the Department selected the following six emerging technologies for further investigation to better understand how the health product space is evolving:

  1. Artificial intelligence, including activities such as machine learning, neural networks, natural language processing, and robotics.
  2. Advanced cell therapies, such as individualized cell therapies tailor-made to address specific patient needs.
  3. Big data, from sources such as sensors, genetic information, and social media that are increasingly used to inform patient and health care practitioner decisions.
  4. 3D printing of health products (e.g., implants, prosthetics, cells, tissues).
  5. New ways of delivering drugs that bring together different product lines and methods (e.g., nano-carriers, implantable devices).
  6. Gene editing, including individualized gene therapies that can assist in preventing and treating certain diseases.

Next, to test the drivers identified and further investigate emerging technologies, the Department consulted key organizations and thought leaders across the country with expertise in health innovation. To this end, Health Canada held seven workshops with over 140 representatives from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians in Ottawa, Toronto, Montreal, and Vancouver. [emphases mine]

The ‘outcome’ report, ‘What we heard …’, is well worth reading in its entirety; it’s about 9 pp.

I have one comment: ‘stakeholders’ don’t seem to include anyone who isn’t “from industry associations, small-to-medium sized enterprises and start-ups, larger multinational companies, investors, researchers, and clinicians” or from “Ottawa, Toronto, Montreal, and Vancouver.” Aren’t the rest of us stakeholders?

Innovating risk analysis

This line in the report caught my eye (from Health Canada’s Toward a strengthened Assisted Human Reproduction Act: A Consultation with Canadians on Key Policy Proposals webpage),

There is increasing need to enable innovation in a flexible, risk-based way, with appropriate oversight to ensure safety, quality, and efficacy. [emphases mine]

It reminded me of the 2019 federal budget (from my March 22, 2019 posting). One comment before proceeding: regulation and risk are tightly linked and so, by innovating regulation, they are by extension also innovating risk analysis,

… Budget 2019 introduces the first three “Regulatory Roadmaps” to specifically address stakeholder issues and irritants in these sectors, informed by over 140 responses [emphasis mine] from businesses and Canadians across the country, as well as recommendations from the Economic Strategy Tables.

Introducing Regulatory Roadmaps

These Roadmaps lay out the Government’s plans to modernize regulatory frameworks, without compromising our strong health, safety, and environmental protections. They contain proposals for legislative and regulatory amendments as well as novel regulatory approaches to accommodate emerging technologies, including the use of regulatory sandboxes and pilot projects—better aligning our regulatory frameworks with industry realities.

Budget 2019 proposes the necessary funding and legislative revisions so that regulatory departments and agencies can move forward on the Roadmaps, including providing the Canadian Food Inspection Agency, Health Canada and Transport Canada with up to $219.1 million over five years, starting in 2019–20, (with $0.5 million in remaining amortization), and $3.1 million per year on an ongoing basis.

In the coming weeks, the Government will be releasing the full Regulatory Roadmaps for each of the reviews, as well as timelines for enacting specific initiatives, which can be grouped in the following three main areas:

What Is a Regulatory Sandbox? Regulatory sandboxes are controlled “safe spaces” in which innovative products, services, business models and delivery mechanisms can be tested without immediately being subject to all of the regulatory requirements.
– European Banking Authority, 2017

Establishing a regulatory sandbox for new and innovative medical products
The regulatory approval system has not kept up with new medical technologies and processes. Health Canada proposes to modernize regulations to put in place a regulatory sandbox for new and innovative products, such as tissues developed through 3D printing, artificial intelligence, and gene therapies targeted to specific individuals. [emphasis mine]

Modernizing the regulation of clinical trials
Industry and academics have expressed concerns that regulations related to clinical trials are overly prescriptive and inconsistent. Health Canada proposes to implement a risk-based approach [emphasis mine] to clinical trials to reduce costs to industry and academics by removing unnecessary requirements for low-risk drugs and trials. The regulations will also provide the agri-food industry with the ability to carry out clinical trials within Canada on products such as food for special dietary use and novel foods.

Does the government always get 140 responses from a consultation process? Moving on, I agree with finding new approaches to regulatory processes and oversight and, by extension, new approaches to risk analysis.

Earlier in this post, I asked if someone had a budget for public relations/promotion. I wasn’t joking. My March 22, 2019 posting also included these line items in the proposed 2019 budget,

Budget 2019 proposes to make additional investments in support of the following organizations:
Stem Cell Network: Stem cell research—pioneered by two Canadians in the 1960s [James Till and Ernest McCulloch]—holds great promise for new therapies and medical treatments for respiratory and heart diseases, spinal cord injury, cancer, and many other diseases and disorders. The Stem Cell Network is a national not-for-profit organization that helps translate stem cell research into clinical applications and commercial products. To support this important work and foster Canada’s leadership in stem cell research, Budget 2019 proposes to provide the Stem Cell Network with renewed funding of $18 million over three years, starting in 2019–20.

Genome Canada: The insights derived from genomics—the study of the entire genetic information of living things encoded in their DNA and related molecules and proteins—hold the potential for breakthroughs that can improve the lives of Canadians and drive innovation and economic growth. Genome Canada is a not-for-profit organization dedicated to advancing genomics science and technology in order to create economic and social benefits for Canadians. To support Genome Canada’s operations, Budget 2019 proposes to provide Genome Canada with $100.5 million over five years, starting in 2020–21. This investment will also enable Genome Canada to launch new large-scale research competitions and projects, in collaboration with external partners, ensuring that Canada’s research community continues to have access to the resources needed to make transformative scientific breakthroughs and translate these discoveries into real-world applications.

Years ago, I managed to find a webpage with all of the proposals various organizations were submitting to a government budget committee. It was eye-opening. You could tell which organizations had been able to hire someone who knew the current government buzzwords and the things a government bureaucrat would want to hear, and which organizations hadn’t.

Of course, if the government of the day is adamantly against or uninterested, no amount of persuasion will get your organization more money in the budget.

Finally

Reluctantly, I am inclined to explore the topic of emerging technologies such as gene-editing not only in the field of agriculture (for gene-editing of plants, fish, and animals, see my November 28, 2018 posting) but also in humans. At the very least, we need to discuss whether or not we choose to participate.

If you are interested in the arguments against changing Canada’s prohibition on gene-editing of humans, there’s an October 2, 2017 posting on Impact Ethics by Françoise Baylis, Professor and Canada Research Chair in Bioethics and Philosophy at Dalhousie University, and Alana Cattapan of the Johnson Shoyama Graduate School of Public Policy at the University of Saskatchewan, which makes some compelling arguments. Of course, it was written before the CRISPR twins (my November 28, 2018 posting).

Recalling CRISPR Therapeutics (mentioned by Gierczak): the company received permission to run clinical trials in the US in October 2018 after the FDA (US Food and Drug Administration) lifted an earlier ban on its trials, according to an Oct. 10, 2018 article by Frank Vinhuan for exome,

The partners also noted that their therapy is making progress outside of the U.S. They announced that they have received regulatory clearance in “multiple countries” to begin tests of the experimental treatment in both sickle cell disease and beta thalassemia, …

It seems to me that the quotes around “multiple countries” are meant to suggest doubt of some kind. Generally speaking, company representatives make those kinds of generalizations when they’re trying to pump up their copy, e.g., a 50% increase in attendance with no absolute numbers to tell you what that means. It could mean two people attended the first year and then brought a friend the next year, or that 100 people attended and the next year there were 150.
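
To put numbers to that point, here’s a minimal sketch (in Python, using made-up attendance figures purely for illustration) of how two very different scenarios can both be reported as the same “50% increase”,

# Two hypothetical attendance scenarios that both yield a "50% increase".
scenarios = {
    "tiny event": (2, 3),        # two attendees the first year, three the next
    "larger event": (100, 150),  # 100 attendees the first year, 150 the next
}

for name, (year_one, year_two) in scenarios.items():
    increase = (year_two - year_one) / year_one * 100
    print(f"{name}: {year_one} -> {year_two} attendees ({increase:.0f}% increase)")

Without the underlying counts, both scenarios produce the same impressive-sounding percentage.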

Despite attempts to declare that personalized medicine has arrived, I think everything is still in flux with no preordained outcome. The future has yet to be determined, but it will be, and I, for one, would like to have some say in the matter.

Summer (2019) Institute on AI (artificial intelligence) Societal Impacts, Governance, and Ethics in Alberta, Canada

The deadline for applications is April 7, 2019. As for whether or not you might like to attend, here’s more from a joint March 11, 2019 Alberta Machine Intelligence Institute (Amii)/Canadian Institute for Advanced Research (CIFAR)/University of California at Los Angeles (UCLA) Law School news release (also on globalnewswire.com),

What will Artificial Intelligence (AI) mean for society? That’s the question scholars from a variety of disciplines will explore during the inaugural Summer Institute on AI Societal Impacts, Governance, and Ethics. Summer Institute, co-hosted by the Alberta Machine Intelligence Institute (Amii) and CIFAR, with support from UCLA School of Law, takes place July 22-24, 2019 in Edmonton, Canada.

“Recent advances in AI have brought a surge of attention to the field – both excitement and concern,” says co-organizer and UCLA professor, Edward Parson. “From algorithmic bias to autonomous vehicles, personal privacy to automation replacing jobs. Summer Institute will bring together exceptional people to talk about how humanity can receive the benefits and not get the worst harms from these rapid changes.”

Summer Institute brings together experts, grad students and researchers from multiple backgrounds to explore the societal, governmental, and ethical implications of AI. A combination of lectures, panels, and participatory problem-solving, this comprehensive interdisciplinary event aims to build understanding and action around these high-stakes issues.

“Machine intelligence is opening transformative opportunities across the world,” says John Shillington, CEO of Amii, “and Amii is excited to bring together our own world-leading researchers with experts from areas such as law, philosophy and ethics for this important discussion. Interdisciplinary perspectives will be essential to the ongoing development of machine intelligence and for ensuring these opportunities have the broadest reach possible.”

Over the three-day program, 30 graduate-level students and early-career researchers will engage with leading experts and researchers including event co-organizers: Western University’s Daniel Lizotte, Amii’s Alona Fyshe and UCLA’s Edward Parson. Participants will also have a chance to shape the curriculum throughout this uniquely interactive event.

Summer Institute takes place prior to Deep Learning and Reinforcement Learning Summer School, and includes a combined event on July 24th [2019] for both Summer Institute and Summer School participants.

Visit dlrlsummerschool.ca/the-summer-institute to apply; applications close April 7, 2019.

View our Summer Institute Biographies & Boilerplates for more information on confirmed faculty members and co-hosting organizations. Follow the conversation through social media channels using the hashtag #SI2019.

Media Contact: Spencer Murray, Director of Communications & Public Relations, Amii
t: 587.415.6100 | c: 780.991.7136 | e: spencer.murray@amii.ca

There’s a bit more information on The Summer Institute on AI and Society webpage (on the Deep Learning and Reinforcement Learning Summer School 2019 website) such as this more complete list of speakers,

Confirmed speakers at Summer Institute include:

Alona Fyshe, University of Alberta/Amii (SI co-organizer)
Edward Parson, UCLA (SI co-organizer)
Daniel Lizotte, Western University (SI co-organizer)
Geoffrey Rockwell, University of Alberta
Graham Taylor, University of Guelph/Vector Institute
Rob Lempert, Rand Corporation
Gary Marchant, Arizona State University
Richard Re, UCLA
Evan Selinger, Rochester Institute of Technology
Elana Zeide, UCLA

Two questions: why are all the Summer Institute speakers either Canada- or US-based? And what about South American, Asian, Middle Eastern, and other thinkers?

One last thought, I wonder if this ‘AI & ethics summer institute’ has anything to do with the Pan-Canadian Artificial Intelligence Strategy, which CIFAR administers and where both the University of Alberta and Vector Institute are members.

Media registration for United Nations 3rd AI (artificial intelligence) for Good Global Summit

This is strictly for folks who have media accreditation. First, the news about the summit, and then some detail about how you might obtain accreditation should you be interested in going to Switzerland. Warning: The International Telecommunication Union, which is holding this summit, is a United Nations agency, and you will note almost an entire paragraph of ‘alphabet soup’ when all the ‘sister’ agencies involved are listed.

From the March 21, 2019 International Telecommunication Union (ITU) media advisory (Note: There have been some changes to the formatting),

Geneva, 21 March 2019

Artificial Intelligence (AI) has taken giant leaps forward in recent years, inspiring growing confidence in AI’s ability to assist in solving some of humanity’s greatest challenges. Leaders in AI and humanitarian action are convening on the neutral platform offered by the United Nations to work towards AI improving the quality and sustainability of life on our planet.
The 2017 summit marked the beginning of global dialogue on the potential of AI to act as a force for good. The action-oriented 2018 summit gave rise to numerous ‘AI for Good’ projects, including an ‘AI for Health’ Focus Group, now led by ITU and the World Health Organization (WHO). The 2019 summit will continue to connect AI innovators with public and private-sector decision-makers, building collaboration to maximize the impact of ‘AI for Good’.

Organized by the International Telecommunication Union (ITU) – the United Nations specialized agency for information and communication technology (ICT) – in partnership with the XPRIZE Foundation, the Association for Computing Machinery (ACM) and close to 30 sister United Nations agencies, the 3rd annual AI for Good Global Summit in Geneva, 28-31 May, is the leading United Nations platform for inclusive dialogue on AI. The goal of the summit is to identify practical applications of AI to accelerate progress towards the United Nations Sustainable Development Goals.

►►► MEDIA REGISTRATION IS NOW OPEN ◄◄◄

Media are recommended to register in advance to receive key announcements in the run-up to the summit.

WHAT: The summit attracts a cross-section of AI experts from industry and academia, global business leaders, Heads of UN agencies, ICT ministers, non-governmental organizations, and civil society.

The summit is designed to generate ‘AI for Good’ projects able to be enacted in the near term, guided by the summit’s multi-stakeholder and inter-disciplinary audience. It also formulates supporting strategies to ensure trusted, safe and inclusive development of AI technologies and equitable access to their benefits.

The 2019 summit will highlight AI’s value in advancing education, healthcare and wellbeing, social and economic equality, space research, and smart and safe mobility. It will propose actions to assist high-potential AI solutions in achieving global scale. It will host debate around unintended consequences of AI as well as AI’s relationship with art and culture. A ‘learning day’ will offer potential AI adopters an audience with leading AI experts and educators.

A dynamic show floor will demonstrate innovations at the cutting edge of AI research and development, such as the IBM Watson live debater; the Fusion collaborative exoskeleton; RoboRace, the world’s first self-driving electric racing car; avatar prototypes; and the ElliQ social robot for the care of the elderly. Summit attendees can also look forward to AI-inspired performances from world-renowned musician Jojo Mayer and award-winning vocal and visual artist Reeps One.

WHEN: 28-31 May 2019
WHERE: International Conference Centre Geneva, 17 Rue de Varembé, Geneva, Switzerland

WHO: Over 100 speakers have been confirmed to date, including:

Jim Hagemann Snabe – Chairman, Siemens​​
Cédric Villani – AI advisor to the President of France, and Mathematics Fields Medal Winner
Jean-Philippe Courtois – President of Global Operations, Microsoft
Anousheh Ansari – CEO, XPRIZE Foundation, Space Ambassador
Yves Daccord – Director General, International Committee of the Red Cross
Yan Huang – Director AI Innovation, Baidu
Timnit Gebru – Head of AI Ethics, Google
Vladimir Kramnik – World Chess Champion
Vicki Hanson – CEO, ACM
Zoubin Ghahramani – Chief Scientist, Uber, and Professor of Engineering, University of Cambridge
Lucas di Grassi – Formula E World Racing Champion, CEO of Roborace

Confirmed speakers also include C-level and expert representatives of Bosch, Botnar Foundation, Byton, Cambridge Quantum Computing, the cities of Montreal and Pittsburg, Darktrace, Deloitte, EPFL, European Space Agency, Factmata, Google, IBM, IEEE, IFIP, Intel, IPSoft, Iridescent, MasterCard, Mechanica.ai, Minecraft, NASA, Nethope, NVIDIA, Ocean Protocol, Open AI, Philips, PWC, Stanford University, University of Geneva, and WWF.

Please visit the summit programme for more information on the latest speakers, breakthrough sessions and panels.

The summit is organized in partnership with the following sister United Nations agencies: CTBTO, ICAO, ILO, IOM, UNAIDS, UNCTAD, UNDESA, UNDPA, UNEP, UNESCO, UNFPA, UNGP, UNHCR, UNICEF, UNICRI, UNIDIR, UNIDO, UNISDR, UNITAR, UNODA, UNODC, UNOOSA, UNOPS, UNU, WBG, WFP, WHO, and WIPO.

The 2019 summit is kindly supported by Platinum Sponsor and Strategic Partner, Microsoft; Gold Sponsors, ACM, the Kay Family Foundation, Mind.ai and the Autonomous Driver Alliance; Silver Sponsors, Deloitte and the Zero Abuse Project; and Bronze Sponsor, Live Tiles.​

More information available at aiforgood.itu.int
Join the conversation on social media using the hashtag #AIforGood

As promised here are the media accreditation details from the ITU Media Registration and Accreditation webpage,

To gain media access, ITU must confirm your status as a bona fide member of the media. Therefore, please read ITU’s Media Accreditation Guidelines below so you are aware of the information you will be required to submit for ITU to confirm such status.
Media accreditation is not granted to 1) non-editorial staff working for a publishing house (e.g. management, marketing, advertising executives, etc.); 2) researchers, academics, authors or editors of directories; 3) employees of information outlets of public, non-governmental or private entities that are not first and foremost media organizations; 4) members of professional broadcasting or media associations; 5) press or communication professionals accompanying member state delegations; and 6) citizen journalists under no apparent editorial board oversight. If you have questions about your eligibility, please email us at pressreg@itu.int.

Applications for accreditation are considered on a case-by-case basis and ITU reserves the right to request additional proof or documentation other than what is listed below. Media accreditation decisions rest with ITU and all decisions are final.

Accreditation eligibility & credentials
1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int along with the required supporting credentials, based on the type of media organization you work for:

Print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;
o please submit 2 copies or links to recent byline articles published within the last 4 months.

News wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;
o please submit 2 copies or links to recent byline articles or broadcasting material published within the last 4 months.

Broadcast media should provide news and information programmes to the general public. Inde​pendent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;
o please submit broadcasting material published within the last 4 months.

Freelance journalists and photographers must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter and at the discretion of the ITU Corporate Communication Division.
o if possible, please submit a valid assignment letter from the news organization or publication.

2. Bloggers and community media may be granted accreditation if the content produced is deemed relevant to the industry, contains news commentary, is regularly updated and/or made publicly available. Corporate bloggers may register as normal participants (not media). Please see Guidelines for Bloggers and Community Media Accreditation below for more details:

Special guidelines for bloggers and community media accreditation

ITU is committed to working with independent and ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs, community or online radio, limited print formats which generally carry paid advertising and other online media. These are some of the guidelines we use to determine whether to accredit bloggers and community media representatives:

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. If your media outlet is new, you must have an established record of having written extensively on ICT issues and must present copies or links to two recently published videos, podcasts or articles with your byline.

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int.

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn.

UN-accredited media

Media already accredited and badged by the United Nations are automatically accredited and registered by ITU. In this case, you only need to send a copy of your UN badge to pressreg@itu.int to make sure you receive your event badge. Anyone joining an ITU event MUST have an event badge in order to access the premises. Please make sure you let us know in advance that you are planning to attend so your event badge is ready for printing and pick-up.

You can register and get accreditation here (scroll past the guidelines). Good luck!