Tag Archives: IBM

Ghosts, mechanical turks, and pseudo-AI (artificial intelligence)—Is it all a con game?

There’s been more than one artificial intelligence (AI) story featured here on this blog, but the ones featured in this posting are the first I’ve stumbled across that suggest the hype is even more exaggerated than the most cynical might have thought. (BTW, the 2019 material appears later in the posting, as I have taken a chronological approach.)

It seems a lot of companies touting their AI algorithms and capabilities are relying on human beings to do the work, from a July 6, 2018 article by Olivia Solon for the Guardian (Note: A link has been removed),

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. …

The Turk

Fooling people with machines that seem intelligent is not new according to a Sept. 10, 2018 article by Seth Stevenson for Slate.com (Note: Links have been removed),

It’s 1783, and Paris is gripped by the prospect of a chess match. One of the contestants is François-André Philidor, who is considered the greatest chess player in Paris, and possibly the world. Everyone is so excited because Philidor is about to go head-to-head with the other biggest sensation in the chess world at the time.

But his opponent isn’t a man. And it’s not a woman, either. It’s a machine.

This story may sound a lot like Garry Kasparov taking on Deep Blue, IBM’s chess-playing supercomputer. But that was only a couple of decades ago, and this chess match in Paris happened more than 200 years ago. It doesn’t seem like a robot that can play chess would even be possible in the 1780s. This machine playing against Philidor was making an incredible technological leap—playing chess, and not only that, but beating humans at chess.

In the end, it didn’t quite beat Philidor, but the chess master called it one of his toughest matches ever. It was so hard for Philidor to get a read on his opponent, which was a carved wooden figure—slightly larger than life—wearing elaborate garments and offering a cold, mean stare.

It seems like the minds of the era would have been completely blown by a robot that could nearly beat a human chess champion. Some people back then worried that it was black magic, but many folks took the development in stride. …

Debates about the hottest topic in technology today—artificial intelligence—didn’t start in the 1940s, with people like Alan Turing and the first computers. It turns out that the arguments about AI go back much further than you might imagine. The story of the 18th-century chess machine turns out to be one of those curious tales from history that can help us understand technology today, and where it might go tomorrow.

[In future episodes of our podcast, Secret History of the Future] we’re going to look at the first cyberattack, which happened in the 1830s, and find out how the Victorians invented virtual reality.

Philidor’s opponent was known as The Turk or Mechanical Turk, and that ‘machine’ was in fact a masterful hoax: The Turk held a hidden compartment from which a human being directed its moves.

People pretending to be AI agents

It seems that today’s AI has something in common with the 18th century Mechanical Turk: there are often humans lurking in the background making things work. From a Sept. 4, 2018 article by Janelle Shane for Slate.com (Note: Links have been removed),

Every day, people are paid to pretend to be bots.

In a strange twist on “robots are coming for my job,” some tech companies that boast about their artificial intelligence have found that at small scales, humans are a cheaper, easier, and more competent alternative to building an A.I. that can do the task.

Sometimes there is no A.I. at all. The “A.I.” is a mockup powered entirely by humans, in a “fake it till you make it” approach used to gauge investor interest or customer behavior. Other times, a real A.I. is combined with human employees ready to step in if the bot shows signs of struggling. These approaches are called “pseudo-A.I.” or sometimes, more optimistically, “hybrid A.I.”

Although some companies see the use of humans for “A.I.” tasks as a temporary bridge, others are embracing pseudo-A.I. as a customer service strategy that combines A.I. scalability with human competence. They’re advertising these as “hybrid A.I.” chatbots, and if they work as planned, you will never know if you were talking to a computer or a human. Every remote interaction could turn into a form of the Turing test. So how can you tell if you’re dealing with a bot pretending to be a human or a human pretending to be a bot?

One of the ways you can’t tell anymore is by looking for human imperfections like grammar mistakes or hesitations. In the past, chatbots had prewritten bits of dialogue that they could mix and match according to built-in rules. Bot speech was synonymous with precise formality. In early Turing tests, spelling mistakes were often a giveaway that the hidden speaker was a human. Today, however, many chatbots are powered by machine learning. Instead of using a programmer’s rules, these algorithms learn by example. And many training data sets come from services like Amazon’s Mechanical Turk, which lets programmers hire humans from around the world to generate examples of tasks like asking and answering questions. These data sets are usually full of casual speech, regionalisms, or other irregularities, so that’s what the algorithms learn. It’s not uncommon these days to get algorithmically generated image captions that read like text messages. And sometimes programmers deliberately add these things in, since most people don’t expect imperfections of an algorithm. In May, Google’s A.I. assistant made headlines for its ability to convincingly imitate the “ums” and “uhs” of a human speaker.

Limited computing power is the main reason that bots are usually good at just one thing at a time. Whenever programmers try to train machine learning algorithms to handle additional tasks, they usually get algorithms that can do many tasks rather badly. In other words, today’s algorithms are artificial narrow intelligence, or A.N.I., rather than artificial general intelligence, or A.G.I. For now, and for many years in the future, any algorithm or chatbot that claims A.G.I-level performance—the ability to deal sensibly with a wide range of topics—is likely to have humans behind the curtain.

Another bot giveaway is a very poor memory. …

Bringing AI to life: ghosts

Sidney Fussell’s April 15, 2019 article for The Atlantic provides more detail about the human/AI interface as found in some Amazon products such as Alexa (a voice-control system),

… Alexa-enabled speakers can and do interpret speech, but Amazon relies on human guidance to make Alexa, well, more human—to help the software understand different accents, recognize celebrity names, and respond to more complex commands. This is true of many artificial intelligence–enabled products. They’re prototypes. They can only approximate their promised functions while humans help with what Harvard researchers have called “the paradox of automation’s last mile.” Advancements in AI, the researchers write, create temporary jobs such as tagging images or annotating clips, even as the technology is meant to supplant human labor. In the case of the Echo, gig workers are paid to improve its voice-recognition software—but then, when it’s advanced enough, it will be used to replace the hostess in a hotel lobby.

A 2016 paper by researchers at Stanford University used a computer vision system to infer, with 88 percent accuracy, the political affiliation of 22 million people based on what car they drive and where they live. Traditional polling would require a full staff, a hefty budget, and months of work. The system completed the task in two weeks. But first, it had to know what a car was. The researchers paid workers through Amazon’s Mechanical Turk [emphasis mine] platform to manually tag thousands of images of cars, so the system would learn to differentiate between shapes, styles, and colors.

It may be a rude awakening for Amazon Echo owners, but AI systems require enormous amounts of categorized data, before, during, and after product launch. …

Isn’t it interesting that Amazon also has a crowdsourcing marketplace for its own products? Calling it ‘Mechanical Turk’ after a famous 18th century hoax would suggest a dark sense of humour somewhere in the corporation. (You can find out more about the Amazon Mechanical Turk on this Amazon website and in its Wikipedia entry.)

Anthropologist Mary L. Gray has coined the phrase ‘ghost work’ for the work that humans perform but for which AI gets the credit. Angela Chan’s May 13, 2019 article for The Verge features Gray as she promotes her latest book, written with Siddharth Suri, ‘Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass’ (Note: A link has been removed),

“Ghost work” is anthropologist Mary L. Gray’s term for the invisible labor that powers our technology platforms. When Gray, a senior researcher at Microsoft Research, first arrived at the company, she learned that building artificial intelligence requires people to manage and clean up data to feed to the training algorithms. “I basically started asking the engineers and computer scientists around me, ‘Who are the people you pay to do this task work of labeling images and classification tasks and cleaning up databases?’” says Gray. Some people said they didn’t know. Others said they didn’t want to know and were concerned that if they looked too closely they might find unsavory working conditions.

So Gray decided to find out for herself. Who are the people, often invisible, who pick up the tasks necessary for these platforms to run? Why do they do this work, and why do they leave? What are their working conditions?

The interview that follows is interesting although it doesn’t seem to me that the question about working conditions is answered in any great detail. However, there is this rather interesting policy suggestion,

If companies want to happily use contract work because they need to constantly churn through new ideas and new aptitudes, the only way to make that a good thing for both sides of that enterprise is for people to be able to jump into that pool. And people do that when they have health care and other provisions. This is the business case for universal health care, for universal education as a public good. It’s going to benefit all enterprise.

I want to get across to people that, in a lot of ways, we’re describing work conditions. We’re not describing a particular type of work. We’re describing today’s conditions for project-based task-driven work. This can happen to everybody’s jobs, and I hate that that might be the motivation because we should have cared all along, as this has been happening to plenty of people. For me, the message of this book is: let’s make this not just manageable, but sustainable and enjoyable. Stop making our lives wrap around work, and start making work serve our lives.

Puts a different spin on AI and work, doesn’t it?

Brainy and brainy: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brainlike computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only choose one of them to be updated at each step based on the neuronal activity.”
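To make that scheme a bit more concrete, here is a minimal sketch (my own illustration in Python, not the authors’ code): several imperfect devices jointly represent one synaptic weight, their conductances are summed to give the effective weight, and a simple round-robin counter picks the single device to be programmed on any given update. The noise model and numbers are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

class MultiMemristiveSynapse:
    """Toy model: several noisy devices jointly represent one synaptic weight."""

    def __init__(self, n_devices=4, g_min=0.0, g_max=1.0):
        self.g = rng.uniform(g_min, g_max, size=n_devices)  # per-device conductances
        self.g_min, self.g_max = g_min, g_max
        self.counter = 0  # simple arbitration: round-robin device selection

    def weight(self):
        # The effective synaptic weight is the sum of all device conductances.
        return float(self.g.sum())

    def update(self, delta):
        # Only one device is programmed per update; the counter decides which one.
        i = self.counter % len(self.g)
        self.counter += 1
        # Model device non-ideality: a noisy, clipped conductance change.
        noise = rng.normal(0.0, 0.05 * abs(delta)) if delta != 0 else 0.0
        self.g[i] = float(np.clip(self.g[i] + delta + noise, self.g_min, self.g_max))


# Repeated small potentiation steps still move the summed weight fairly smoothly,
# even though each individual device update is imprecise.
syn = MultiMemristiveSynapse()
print("weight before:", round(syn.weight(), 3))
for _ in range(8):
    syn.update(+0.1)
print("weight after :", round(syn.weight(), 3))
```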

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also, they’ve got a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
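Before moving on, the ‘compute in place’ idea from the quoted passage can be pictured as a crossbar: if synaptic weights are stored as conductances and inputs are applied as voltages, the currents summed along each column wire give the matrix-vector product in a single analog step. Here is a rough numerical sketch of that intuition; the values are made up, and this is my own toy example rather than anything from the paper.

```python
import numpy as np

# Synaptic weights stored as device conductances (in siemens): one row per
# input line, one column per output neuron, forming a small crossbar array.
G = np.array([[1.0e-6, 5.0e-7],
              [2.0e-6, 1.0e-6],
              [5.0e-7, 3.0e-6]])

# Input activations applied as voltages on the rows (in volts).
V = np.array([0.2, 0.1, 0.3])

# By Ohm's and Kirchhoff's laws, each column wire sums I_j = sum_i G[i, j] * V[i],
# so the multiply-accumulate happens inside the memory array itself.
I = V @ G
print("column currents (A):", I)
```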

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry (Note: Links have been removed),

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]
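For anyone wondering what ‘simulating the exchange of signals between neurons’ involves at the level of a single cell, here is a deliberately simple leaky integrate-and-fire neuron, the kind of point-neuron model that simulators such as NEST and platforms such as SpiNNaker run by the hundreds of thousands. This is a generic textbook sketch with arbitrary parameter values, not code for either system.

```python
# Leaky integrate-and-fire neuron: the membrane potential decays toward rest,
# input current pushes it up, and crossing threshold emits a spike.
dt, t_max = 0.1, 100.0                                        # time step, duration (ms)
tau_m, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0  # ms, mV, mV, mV
r_m = 10.0                                                    # membrane resistance (MOhm)

v = v_rest
spike_times = []
for step in range(int(t_max / dt)):
    t = step * dt
    i_ext = 2.0 if 20.0 <= t <= 80.0 else 0.0   # input current pulse (nA)
    dv = (-(v - v_rest) + r_m * i_ext) / tau_m  # membrane equation
    v += dv * dt
    if v >= v_thresh:
        spike_times.append(t)
        v = v_reset                             # reset after a spike

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
```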

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. DOI: https://doi.org/10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

7nm (nanometre) chip shakeup

From time to time I check out the latest on attempts to shrink computer chips. In my July 11, 2014 posting I noted IBM’s announcement about developing a 7nm computer chip, and later, in my July 15, 2015 posting, I noted IBM’s announcement of a working 7nm chip (from a July 9, 2015 IBM news release): “The breakthrough, accomplished in partnership with GLOBALFOUNDRIES and Samsung at SUNY Polytechnic Institute’s Colleges of Nanoscale Science and Engineering (SUNY Poly CNSE), could result in the ability to place more than 20 billion tiny switches — transistors — on the fingernail-sized chips that power everything from smartphones to spacecraft.”

I’m not sure what happened to the IBM/GlobalFoundries/Samsung partnership, but GlobalFoundries recently announced that it will no longer be working on 7nm chips. From an August 27, 2018 GlobalFoundries news release,

GLOBALFOUNDRIES [GF] today announced an important step in its transformation, continuing the trajectory launched with the appointment of Tom Caulfield as CEO earlier this year. In line with the strategic direction Caulfield has articulated, GF is reshaping its technology portfolio to intensify its focus on delivering truly differentiated offerings for clients in high-growth markets.

GF is realigning its leading-edge FinFET roadmap to serve the next wave of clients that will adopt the technology in the coming years. The company will shift development resources to make its 14/12nm FinFET platform more relevant to these clients, delivering a range of innovative IP and features including RF, embedded memory, low power and more. To support this transition, GF is putting its 7nm FinFET program on hold indefinitely [emphasis mine] and restructuring its research and development teams to support its enhanced portfolio initiatives. This will require a workforce reduction, however a significant number of top technologists will be redeployed on 14/12nm FinFET derivatives and other differentiated offerings.

I tried to find a definition for FinFET, but the reference to a MOSFET and in-gate transistors was too much incomprehensible information packed into a tight space; see the FinFET Wikipedia entry for more, if you dare.

Getting back to the 7nm chip issue, Samuel K. Moore (I don’t think he’s related to the Moore of Moore’s law) wrote an Aug. 28, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) which provides some insight (Note: Links have been removed),

In a major shift in strategy, GlobalFoundries is halting its development of next-generation chipmaking processes. It had planned to move to the so-called 7-nm node, then begin to use extreme-ultraviolet lithography (EUV) to make that process cheaper. From there, it planned to develop even more advanced lithography that would allow for 5- and 3-nanometer nodes. Despite having installed at least one EUV machine at its Fab 8 facility in Malta, N.Y., all those plans are now on indefinite hold, the company announced Monday.

The move leaves only three companies reaching for the highest rungs of the Moore’s Law ladder: Intel, Samsung, and TSMC.

It’s a huge turnabout for GlobalFoundries. …

GlobalFoundries’ rationale for the move is that there are not enough customers that need bleeding-edge 7-nm processes to make it profitable. “While the leading edge gets most of the headlines, fewer customers can afford the transition to 7 nm and finer geometries,” said Samuel Wang, research vice president at Gartner, in a GlobalFoundries press release.

“The vast majority of today’s fabless [emphasis mine] customers are looking to get more value out of each technology generation to leverage the substantial investments required to design into each technology node,” explained GlobalFoundries CEO Tom Caulfield in a press release. “Essentially, these nodes are transitioning to design platforms serving multiple waves of applications, giving each node greater longevity. This industry dynamic has resulted in fewer fabless clients designing into the outer limits of Moore’s Law. We are shifting our resources and focus by doubling down on our investments in differentiated technologies across our entire portfolio that are most relevant to our clients in growing market segments.”

(The dynamic Caulfield describes is something the U.S. Defense Advanced Research Projects Agency is working to disrupt with its $1.5-billion Electronics Resurgence Initiative. Darpa’s [DARPA] partners are trying to collapse the cost of design and allow older process nodes to keep improving by using 3D technology.)

Fabless manufacturing is where a company designs chips but outsources their fabrication to a specialized foundry, according to the Fabless manufacturing Wikipedia entry.

Roland Moore-Colyer (I don’t think he’s related to Moore of Moore’s law either) has written an August 28, 2018 article for theinquirer.net which also explores this latest news from GlobalFoundries (Note: Links have been removed),

EVER PREPPED A SPREAD for a party to then have less than half the people you were expecting show up? That’s probably how GlobalFoundries [sic] feels at the moment.

The chip manufacturer, which was once part of AMD, had a fabrication process geared up for 7-nanometre chips which its customers – including AMD and Qualcomm – were expected to adopt.

But AMD has confirmed that it’s decided to move its 7nm GPU production to TSMC, and Intel is still stuck trying to make chips based on 10nm fabrication.

Arguably, this could mark a stymieing of innovation and cutting-edge designs for chips in the near future. But with processors like AMD’s Threadripper 2990WX overclocked to run at 6GHz across all its 32 cores, in the real world PC fans have no need to worry about consumer chips running out of puff anytime soon.

That’s all folks.

Maybe that’s not all

Steve Blank in a Sept. 10, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website) provides some provocative commentary on the Global Foundries announcement (Note: A link has been removed),

For most of our lives, the idea that computers and technology would get better, faster, and cheaper every year was as assured as the sun rising every morning. The story “GlobalFoundries Halts 7-nm Chip Development” doesn’t sound like the end of that era, but for you and anyone who uses an electronic device, it most certainly is.

Technology innovation is going to take a different direction.

This story just goes on and on

There was a new development, according to a Sept. 12, 2018 posting on the Nanoclast blog by, again, Samuel K. Moore (Note: Links have been removed),

At an event today [Sept. 12, 2018], Apple executives said that the new iPhone Xs and Xs Max will contain the first smartphone processor to be made using 7 nm manufacturing technology, the most advanced process node. Huawei made the same claim, to less fanfare, late last month and it’s unclear who really deserves the accolades. If anybody does, it’s TSMC, which manufactures both chips.

TSMC went into volume production with 7-nm tech in April, and rival Samsung is moving toward commercial 7-nm production later this year or in early 2019. GlobalFoundries recently abandoned its attempts to develop a 7 nm process, reasoning that the multibillion-dollar investment would never pay for itself. And Intel announced delays in its move to its next manufacturing technology, which it calls a 10-nm node but which may be equivalent to others’ 7-nm technology.

There’s a certain ‘soap opera’ quality to this with all the twists and turns.

More memory, less space and a walk down the cryptocurrency road

Libraries, archives, records management, oral history, etc.: there are many institutions and names for how we manage collective and personal memory. You might call it a peculiarly human obsession stretching back into antiquity. For example, there’s the Library of Alexandria (Wikipedia entry), founded in the third, or possibly second, century BCE (before the common era) and reputed to store all the knowledge in the world. It was destroyed, although accounts differ as to when and how, but its loss remains a potent reminder of memory’s fragility.

These days, the technology community is terribly concerned with storing ever more bits of data on materials that are reaching their storage limits. I have news of a possible solution, an interview of sorts with the researchers working on this new technology, and some very recent research into policies for cryptocurrency mining and development. That bit about cryptocurrency makes more sense once you read the response to one of the interview questions.

Memory

It seems University of Alberta researchers may have found a way to increase memory exponentially, from a July 23, 2018 news item on ScienceDaily,

The most dense solid-state memory ever created could soon exceed the capabilities of current computer storage devices by 1,000 times, thanks to a new technique scientists at the University of Alberta have perfected.

“Essentially, you can take all 45 million songs on iTunes and store them on the surface of one quarter,” said Roshan Achal, PhD student in Department of Physics and lead author on the new research. “Five years ago, this wasn’t even something we thought possible.”

A July 23, 2018 University of Alberta news release (also on EurekAlert) by Jennifer-Anne Pascoe, which originated the news item, provides more information,

Previous discoveries were stable only at cryogenic conditions, meaning this new finding puts society light years closer to meeting the need for more storage for the current and continued deluge of data. One of the most exciting features of this memory is that it’s road-ready for real-world temperatures, as it can withstand normal use and transportation beyond the lab.

“What is often overlooked in the nanofabrication business is actual transportation to an end user, that simply was not possible until now given temperature restrictions,” continued Achal. “Our memory is stable well above room temperature and precise down to the atom.”

Achal explained that immediate applications will be data archival. Next steps will be increasing readout and writing speeds, meaning even more flexible applications.

More memory, less space

Achal works with University of Alberta physics professor Robert Wolkow, a pioneer in the field of atomic-scale physics. Wolkow perfected the art of the science behind nanotip technology, which, thanks to Wolkow and his team’s continued work, has now reached a tipping point, meaning scaling up atomic-scale manufacturing for commercialization.

“With this last piece of the puzzle now in-hand, atom-scale fabrication will become a commercial reality in the very near future,” said Wolkow. Wolkow’s Spin-off [sic] company, Quantum Silicon Inc., is hard at work on commercializing atom-scale fabrication for use in all areas of the technology sector.

To demonstrate the new discovery, Achal, Wolkow, and their fellow scientists not only fabricated the world’s smallest maple leaf, they also encoded the entire alphabet at a density of 138 terabytes per square inch, roughly equivalent to writing 350,000 letters across a grain of rice. For a playful twist, Achal also encoded music as an atom-sized song, the first 24 notes of which will make any video-game player of the 80s and 90s nostalgic for yesteryear but excited for the future of technology and society.

As noted in the news release, there is an atom-sized song, which is available in a video embedded in the original post.

As for the nano-sized maple leaf, I highlighted that bit of whimsy in a June 30, 2017 posting.

Here’s a link to and a citation for the paper,

Lithography for robust and editable atomic-scale silicon devices and memories by Roshan Achal, Mohammad Rashidi, Jeremiah Croshaw, David Churchill, Marco Taucer, Taleana Huff, Martin Cloutier, Jason Pitters, & Robert A. Wolkow. Nature Communications volume 9, Article number: 2778 (2018) DOI: https://doi.org/10.1038/s41467-018-05171-y Published 23 July 2018

This paper is open access.

For interested parties, you can find Quantum Silicon (QSI) here. My Edmonton geography is all but nonexistent; still, it seems to me the company address on Saskatchewan Drive is a University of Alberta address. It’s also the address for the National Research Council of Canada. Perhaps this is a university/government spin-off company?

The ‘interview’

I sent some questions to the researchers at the University of Alberta who very kindly provided me with the following answers. Roshan Achal passed on one of the questions to his colleague Taleana Huff for her response. Both Achal and Huff are associated with QSI.

Unfortunately I could not find any pictures of all three researchers (Achal, Huff, and Wolkow) together.

Roshan Achal (left) used nanotechnology perfected by his PhD supervisor, Robert Wolkow (right) to create atomic-scale computer memory that could exceed the capacity of today’s solid-state storage drives by 1,000 times. (Photo: Faculty of Science)

(1) SHRINKING THE MANUFACTURING PROCESS TO THE ATOMIC SCALE HAS ATTRACTED A LOT OF ATTENTION OVER THE YEARS, STARTING WITH SCIENCE FICTION OR RICHARD FEYNMAN OR K. ERIC DREXLER, ETC. IN ANY EVENT, THE ORIGINS ARE CONTESTED, SO I WON’T PUT YOU ON THE SPOT BY ASKING WHO STARTED IT ALL; INSTEAD, HOW DID YOU GET STARTED?

I got started in this field about 6 years ago, when I undertook a MSc with Dr. Wolkow here at the University of Alberta. Before that point, I had only ever heard of a scanning tunneling microscope from what was taught in my classes. I was aware of the famous IBM logo made up from just a handful of atoms using this machine, but I didn’t know what else could be done. Here, Dr. Wolkow introduced me to his line of research, and I saw the immense potential for growth in this area and decided to pursue it further. I had the chance to interact with and learn from nanofabrication experts and gain the skills necessary to begin playing around with my own techniques and ideas during my PhD.

(2) AS I UNDERSTAND IT, THESE ARE THE PIECES YOU’VE BEEN WORKING ON: (1) THE TUNGSTEN MICROSCOPE TIP, WHICH MAKE[S] (2) THE SMALLEST QUANTUM DOTS (SINGLE ATOMS OF SILICON), (3) THE AUTOMATION OF THE QUANTUM DOT PRODUCTION PROCESS, AND (4) THE “MOST DENSE SOLID-STATE MEMORY EVER CREATED.” WHAT’S MISSING FROM THE LIST AND IS THAT WHAT YOU’RE WORKING ON NOW?

One of the things missing from the list, that we are currently working on, is the ability to easily communicate (electrically) from the macroscale (our world) to the nanoscale, without the use of a scanning tunneling microscope. With this, we would be able to then construct devices using the other pieces we’ve developed up to this point, and then integrate them with more conventional electronics. This would bring us yet another step closer to the realization of atomic-scale electronics.

(3) PERHAPS YOU COULD CLARIFY SOMETHING FOR ME. USUALLY WHEN SOLID STATE MEMORY IS MENTIONED, THERE’S GREAT CONCERN ABOUT MOORE’S LAW. IS THIS WORK GOING TO CREATE A NEW LAW? AND, WHAT IF ANYTHING DOES YOUR MEMORY DEVICE HAVE TO DO WITH QUANTUM COMPUTING?

That is an interesting question. With the density we’ve achieved, there are not too many surfaces where atomic sites are more closely spaced to allow for another factor of two improvement. In that sense, it would be difficult to improve memory densities further using these techniques alone. In order to continue Moore’s law, new techniques, or storage methods would have to be developed to move beyond atomic-scale storage.

The memory design itself does not have anything to do with quantum computing, however, the lithographic techniques developed through our work, may enable the development of certain quantum-dot-based quantum computing schemes.

(4) THIS MAY BE A LITTLE OUT OF LEFT FIELD (OR FURTHER OUT THAN THE OTHERS), COULD YOUR MEMORY DEVICE HAVE AN IMPACT ON THE DEVELOPMENT OF CRYPTOCURRENCY AND BLOCKCHAIN? IF SO, WHAT MIGHT THAT IMPACT BE?

I am not very familiar with these topics, however, co-author Taleana Huff has provided some thoughts:

Taleana Huff (downloaded from https://ca.linkedin.com/in/taleana-huff)

“The memory, as we’ve designed it, might not have too much of an impact in and of itself. Cryptocurrencies fall into two categories: Proof of Work and Proof of Stake. Proof of Work relies on raw computational power to solve a difficult math problem. If you solve it, you get rewarded with a small amount of that coin. The problem is that it can take a lot of power and energy for your computer to crunch through that problem. Faster access to memory alone could perhaps streamline small parts of this slightly, but it would be very slight. Proof of Stake is already quite power efficient and wouldn’t really have a drastic advantage from better, faster computers.

Now, atomic-scale circuitry built using these new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins. One of the things holding bitcoin back, for example, is that mining it is now consuming power on the order of the annual energy consumption required by small countries. A more efficient way to mine while still taking the same amount of time to solve the problem would make bitcoin much more attractive as a currency.”
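For readers who haven’t seen it spelled out, the ‘difficult math problem’ in Proof of Work is typically a hash puzzle: keep changing a nonce until the hash of the block data falls below a target, which is costly to find but trivial to verify. Here is a toy sketch of that idea (illustrative only; real Bitcoin mining hashes a specific block-header format with double SHA-256 on specialized hardware).

```python
import hashlib

def mine(block_data: str, difficulty_bits: int = 18):
    """Find a nonce so that sha256(block_data + nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest
        nonce += 1

nonce, digest = mine("toy block: Alice pays Bob 1 coin")
print(f"nonce={nonce}  hash={digest}")
# Raising difficulty_bits makes finding a valid nonce exponentially harder (more
# energy), while checking a proposed nonce stays a single hash computation --
# the asymmetry behind the energy costs Huff describes.
```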

Thank you to Roshan Achal and Taleana Huff for helping me to further explore the implications of their work with Dr. Wolkow.

Comments

As usual, after receiving the replies I have more questions, but these people have other things to do, so I’ll content myself with noting that there is something extraordinary in the fact that we can imagine a near future where atomic-scale manufacturing is possible and where, as Achal says, “… storage methods would have to be developed to move beyond atomic-scale [emphasis mine] storage”. In decades past it was the stuff of science fiction or of theorists who didn’t have the tools to turn the idea into a reality. With Wolkow’s, Achal’s, Huff’s, and their colleagues’ work, atomic-scale manufacturing is attainable in the foreseeable future.

Hopefully we’ll be wiser than we have been in the past in how we deploy these new manufacturing techniques. Of course, before we need the wisdom, scientists, as Achal notes, need to find a new way to communicate between the macroscale and the nanoscale.

As for Huff’s comments about cryptocurrency and blockchain technology, I stumbled across this very recent research, from a July 31, 2018 Elsevier press release (also on EurekAlert),

A study [behind a paywall] published in Energy Research & Social Science warns that failure to lower the energy use by Bitcoin and similar Blockchain designs may prevent nations from reaching their climate change mitigation obligations under the Paris Agreement.

The study, authored by Jon Truby, PhD, Assistant Professor, Director of the Centre for Law & Development, College of Law, Qatar University, Doha, Qatar, evaluates the financial and legal options available to lawmakers to moderate blockchain-related energy consumption and foster a sustainable and innovative technology sector. Based on this rigorous review and analysis of the technologies, ownership models, and jurisdictional case law and practices, the article recommends an approach that imposes new taxes, charges, or restrictions to reduce demand by users, miners, and miner manufacturers who employ polluting technologies, and offers incentives that encourage developers to create less energy-intensive/carbon-neutral Blockchain.

“Digital currency mining is the first major industry developed from Blockchain, because its transactions alone consume more electricity than entire nations,” said Dr. Truby. “It needs to be directed towards sustainability if it is to realize its potential advantages.

“Many developers have taken no account of the environmental impact of their designs, so we must encourage them to adopt consensus protocols that do not result in high emissions. Taking no action means we are subsidizing high energy-consuming technology and causing future Blockchain developers to follow the same harmful path. We need to de-socialize the environmental costs involved while continuing to encourage progress of this important technology to unlock its potential economic, environmental, and social benefits,” explained Dr. Truby.

As a digital ledger that is accessible to, and trusted by all participants, Blockchain technology decentralizes and transforms the exchange of assets through peer-to-peer verification and payments. Blockchain technology has been advocated as being capable of delivering environmental and social benefits under the UN’s Sustainable Development Goals. However, Bitcoin’s system has been built in a way that is reminiscent of physical mining of natural resources – costs and efforts rise as the system reaches the ultimate resource limit and the mining of new resources requires increasing hardware resources, which consume huge amounts of electricity.

Putting this into perspective, Dr. Truby said, “the processes involved in a single Bitcoin transaction could provide electricity to a British home for a month – with the environmental costs socialized for private benefit.

“Bitcoin is here to stay, and so, future models must be designed without reliance on energy consumption so disproportionate on their economic or social benefits.”

The study evaluates various Blockchain technologies by their carbon footprints and recommends how to tax or restrict Blockchain types at different phases of production and use to discourage polluting versions and encourage cleaner alternatives. It also analyzes the legal measures that can be introduced to encourage technology innovators to develop low-emissions Blockchain designs. The specific recommendations include imposing levies to prevent path-dependent inertia from constraining innovation:

  • Registration fees collected by brokers from digital coin buyers.
  • “Bitcoin Sin Tax” surcharge on digital currency ownership.
  • Green taxes and restrictions on machinery purchases/imports (e.g. Bitcoin mining machines).
  • Smart contract transaction charges.

According to Dr. Truby, these findings may lead to new taxes, charges or restrictions, but could also lead to financial rewards for innovators developing carbon-neutral Blockchain.

The press release doesn’t fully reflect Dr. Truby’s thoughtfulness or the incentives he has suggested; not everything he proposes is a surcharge, tax, or fee, and some of his recommendations constitute encouragement. Here’s a sample from the conclusion,

The possibilities of Blockchain are endless and incentivisation can help solve various climate change issues, such as through the development of digital currencies to fund climate finance programmes. This type of public-private finance initiative is envisioned in the Paris Agreement, and fiscal tools can incentivize innovators to design financially rewarding Blockchain technology that also achieves environmental goals. Bitcoin, for example, has various utilitarian intentions in its White Paper, which may or may not turn out to be as envisioned, but it would not have been such a success without investors seeking remarkable returns. Embracing such technology, and promoting a shift in behaviour with such fiscal tools, can turn the industry itself towards achieving innovative solutions for environmental goals.

I realize Wolkow et al. are not focused on cryptocurrency and blockchain technology per se, but as Huff notes in her reply, “… new lithographic techniques that we’ve developed, which could perform computations at significantly lower energy costs, would be huge for Proof of Work coins.”

Whether or not there are implications for cryptocurrencies, energy needs, climate change, etc., this kind of innovative work by scientists at the University of Alberta may have implications in fields far beyond the researchers’ original intentions, such as more efficient computation and data storage.

ETA Aug. 6, 2018: Dexter Johnson weighed in with an August 3, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website),

Researchers at the University of Alberta in Canada have developed a new approach to rewritable data storage technology by using a scanning tunneling microscope (STM) to remove and replace hydrogen atoms from the surface of a silicon wafer. If this approach realizes its potential, it could lead to a data storage technology capable of storing 1,000 times more data than today’s hard drives, up to 138 terabytes per square inch.

As a bit of background, Gerd Binnig and Heinrich Rohrer developed the first STM in 1981, work for which they received the Nobel Prize in physics in 1986. In the over 30 years since an STM first imaged an atom by exploiting a phenomenon known as tunneling—which causes electrons to jump from the surface atoms of a material to the tip of an ultrasharp electrode suspended a few angstroms above—the technology has become the backbone of so-called nanotechnology.

In addition to imaging the world on the atomic scale for the last thirty years, STMs have been experimented with as a potential data storage device. Last year, we reported on how IBM (where Binnig and Rohrer first developed the STM) used an STM in combination with an iron atom to serve as an electron-spin resonance sensor to read the magnetic pole of holmium atoms. The north and south poles of the holmium atoms served as the 0 and 1 of digital logic.

The Canadian researchers have taken a somewhat different approach to making an STM into a data storage device by automating a known technique that uses the ultrasharp tip of the STM to apply a voltage pulse above an atom to remove individual hydrogen atoms from the surface of a silicon wafer. Once the atom has been removed, there is a vacancy on the surface. These vacancies can be patterned on the surface to create devices and memories.
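Conceptually, the memory described here is a bitmap written in hydrogen: a lattice site with its hydrogen atom removed can stand for a 1 and an untouched site for a 0. The following back-of-the-envelope sketch shows how text maps onto such a pattern; it is purely illustrative and says nothing about the actual tip control, automation, or error correction involved.

```python
def text_to_vacancy_pattern(text: str, row_width: int = 16):
    """Map ASCII text onto rows of lattice sites:
    '#' = hydrogen atom removed (bit 1), '.' = hydrogen atom present (bit 0)."""
    bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
    rows = [bits[i:i + row_width] for i in range(0, len(bits), row_width)]
    return [row.replace("1", "#").replace("0", ".") for row in rows]


for row in text_to_vacancy_pattern("ABC"):
    print(row)
# Three characters take 24 sites; at one bit per atomic site, the density is set
# by the silicon lattice spacing rather than by any conventional feature size.
```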

If you have the time, I recommend reading Dexter’s posting as he provides clear explanations, additional insight into the work, and more historical detail.

How to get people to trust artificial intelligence

Vyacheslav Polonski’s (University of Oxford researcher) January 10, 2018 piece (originally published Jan. 9, 2018 on The Conversation) on phys.org isn’t a gossip article, although there are parts that could be read that way. Before getting to what I consider the juicy bits, here’s how the piece opens (Note: Links have been removed),

Artificial intelligence [AI] can already predict the future. Police forces are using it to map when and where crime is likely to occur [Note: See my Nov. 23, 2017 posting about predictive policing in Vancouver for details about the first Canadian municipality to introduce the technology]. Doctors can use it to predict when a patient is most likely to have a heart attack or stroke. Researchers are even trying to give AI imagination so it can plan for unexpected consequences.

Many decisions in our lives require a good forecast, and AI agents are almost always better at forecasting than their human counterparts. Yet for all these technological advances, we still seem to deeply lack confidence in AI predictions. Recent cases show that people don’t like relying on AI and prefer to trust human experts, even if these experts are wrong.

The part (juicy bits) that satisfied some of my long-held curiosity was this section on Watson and its life as a medical adjunct (Note: Links have been removed),

IBM’s attempt to promote its supercomputer programme to cancer doctors (Watson for Oncology) was a PR [public relations] disaster. The AI promised to deliver top-quality recommendations on the treatment of 12 cancers that accounted for 80% of the world’s cases. As of today, over 14,000 patients worldwide have received advice based on its calculations.

But when doctors first interacted with Watson they found themselves in a rather difficult situation. On the one hand, if Watson provided guidance about a treatment that coincided with their own opinions, physicians did not see much value in Watson’s recommendations. The supercomputer was simply telling them what they already know, and these recommendations did not change the actual treatment. This may have given doctors some peace of mind, providing them with more confidence in their own decisions. But IBM has yet to provide evidence that Watson actually improves cancer survival rates.

On the other hand, if Watson generated a recommendation that contradicted the experts’ opinion, doctors would typically conclude that Watson wasn’t competent. And the machine wouldn’t be able to explain why its treatment was plausible because its machine learning algorithms were simply too complex to be fully understood by humans. Consequently, this has caused even more mistrust and disbelief, leading many doctors to ignore the seemingly outlandish AI recommendations and stick to their own expertise.

As a result, IBM Watson’s premier medical partner, the MD Anderson Cancer Center, recently announced it was dropping the programme. Similarly, a Danish hospital reportedly abandoned the AI programme after discovering that its cancer doctors disagreed with Watson in over two thirds of cases.

The problem with Watson for Oncology was that doctors simply didn’t trust it. Human trust is often based on our understanding of how other people think and having experience of their reliability. …

It seems to me there might be a bit more to the doctors’ trust issues and I was surprised it didn’t seem to have occurred to Polonski. Then I did some digging (from Polonski’s webpage on the Oxford Internet Institute website),

Vyacheslav Polonski (@slavacm) is a DPhil [PhD] student at the Oxford Internet Institute. His research interests are located at the intersection of network science, media studies and social psychology. Vyacheslav’s doctoral research examines the adoption and use of social network sites, focusing on the effects of social influence, social cognition and identity construction.

Vyacheslav is a Visiting Fellow at Harvard University and a Global Shaper at the World Economic Forum. He was awarded the Master of Science degree with Distinction in the Social Science of the Internet from the University of Oxford in 2013. He also obtained the Bachelor of Science degree with First Class Honours in Management from the London School of Economics and Political Science (LSE) in 2012.

Vyacheslav was honoured at the British Council International Student of the Year 2011 awards, and was named UK’s Student of the Year 2012 and national winner of the Future Business Leader of the Year 2012 awards by TARGETjobs.

Previously, he has worked as a management consultant at Roland Berger Strategy Consultants and gained further work experience at the World Economic Forum, PwC, Mars, Bertelsmann and Amazon.com. Besides, he was involved in several start-ups as part of the 2012 cohort of Entrepreneur First and as part of the founding team of the London office of Rocket Internet. Vyacheslav was the junior editor of the bi-lingual book ‘Inspire a Nation‘ about Barack Obama’s first presidential election campaign. In 2013, he was invited to be a keynote speaker at the inaugural TEDx conference of IE University in Spain to discuss the role of a networked mindset in everyday life.

Vyacheslav is fluent in German, English and Russian, and is passionate about new technologies, social entrepreneurship, philanthropy, philosophy and modern art.

Research interests

Network science, social network analysis, online communities, agency and structure, group dynamics, social interaction, big data, critical mass, network effects, knowledge networks, information diffusion, product adoption

Positions held at the OII

  • DPhil student, October 2013 –
  • MSc Student, October 2012 – August 2013

Polonski doesn’t seem to have any experience dealing with, participating in, or studying the medical community. Getting a doctor to admit that his or her approach to a particular patient’s condition was wrong or misguided runs counter to their training and, by extension, the institution of medicine. Also, one of the biggest problems in any field is getting people to change and it’s not always about trust. In this instance, you’re asking a doctor to back someone else’s opinion after he or she has rendered theirs. This is difficult even when the other party is another human doctor let alone a form of artificial intelligence.

If you want to get a sense of just how hard it is to get someone to back down after they’ve committed to a position, read this January 10, 2018 essay by Lara Bazelon, an associate professor at the University of San Francisco School of Law. This is just one of the cases (Note: Links have been removed),

Davontae Sanford was 14 years old when he confessed to murdering four people in a drug house on Detroit’s East Side. Left alone with detectives in a late-night interrogation, Sanford says he broke down after being told he could go home if he gave them “something.” On the advice of a lawyer whose license was later suspended for misconduct, Sanford pleaded guilty in the middle of his March 2008 trial and received a sentence of 39 to 92 years in prison.

Sixteen days after Sanford was sentenced, a hit man named Vincent Smothers told the police he had carried out 12 contract killings, including the four Sanford had pleaded guilty to committing. Smothers explained that he’d worked with an accomplice, Ernest Davis, and he provided a wealth of corroborating details to back up his account. Smothers told police where they could find one of the weapons used in the murders; the gun was recovered and ballistics matched it to the crime scene. He also told the police he had used a different gun in several of the other murders, which ballistics tests confirmed. Once Smothers’ confession was corroborated, it was clear Sanford was innocent. Smothers made this point explicitly in a 2015 affidavit, emphasizing that Sanford hadn’t been involved in the crimes “in any way.”

Guess what happened? (Note: Links have been removed),

But Smothers and Davis were never charged. Neither was Leroy Payne, the man Smothers alleged had paid him to commit the murders. …

Davontae Sanford, meanwhile, remained behind bars, locked up for crimes he very clearly didn’t commit.

Police failed to turn over all the relevant information in Smothers’ confession to Sanford’s legal team, as the law required them to do. When that information was leaked in 2009, Sanford’s attorneys sought to reverse his conviction on the basis of actual innocence. Wayne County Prosecutor Kym Worthy fought back, opposing the motion all the way to the Michigan Supreme Court. In 2014, the court sided with Worthy, ruling that actual innocence was not a valid reason to withdraw a guilty plea [emphasis mine]. Sanford would remain in prison for another two years.

Doctors are just as invested in their opinions and professional judgments as lawyers (such as the prosecutor and the judges on the Michigan Supreme Court) are.

There is one more problem. From the doctor’s (or anyone else’s) perspective, if the AI is making the decisions, why does he or she need to be there? At best, it’s as if the AI were turning the doctor into its servant or, at worst, replacing the doctor. Polonski alludes to the problem in one of his solutions to the ‘trust’ issue (Note: A link has been removed),

Research suggests involving people more in the AI decision-making process could also improve trust and allow the AI to learn from human experience. For example, one study showed people who were given the freedom to slightly modify an algorithm felt more satisfied with its decisions, more likely to believe it was superior and more likely to use it in the future.

Having input into the AI decision-making process somewhat addresses one of the problems, but the commitment to one’s own judgment, even when there is overwhelming evidence to the contrary, is a perennially thorny problem. The legal case mentioned earlier is clearly one where the contrarian is wrong, but it’s not always that obvious. As well, sometimes the people who hold out against the majority are right.

US Army

Getting back to building trust, it turns out the US Army Research Laboratory is also interested in transparency where AI is concerned (from a January 11, 2018 US Army news release on EurekAlert),

U.S. Army Research Laboratory [ARL] scientists developed ways to improve collaboration between humans and artificially intelligent agents in two projects recently completed for the Autonomy Research Pilot Initiative supported by the Office of Secretary of Defense. They did so by enhancing the agent transparency [emphasis mine], which refers to a robot, unmanned vehicle, or software agent’s ability to convey to humans its intent, performance, future plans, and reasoning process.

“As machine agents become more sophisticated and independent, it is critical for their human counterparts to understand their intent, behaviors, reasoning process behind those behaviors, and expected outcomes so the humans can properly calibrate their trust [emphasis mine] in the systems and make appropriate decisions,” explained ARL’s Dr. Jessie Chen, senior research psychologist.

The U.S. Defense Science Board, in a 2016 report, identified six barriers to human trust in autonomous systems, with ‘low observability, predictability, directability and auditability’ as well as ‘low mutual understanding of common goals’ being among the key issues.

In order to address these issues, Chen and her colleagues developed the Situation awareness-based Agent Transparency, or SAT, model and measured its effectiveness on human-agent team performance in a series of human factors studies supported by the ARPI. The SAT model deals with the information requirements from an agent to its human collaborator in order for the human to obtain effective situation awareness of the agent in its tasking environment. At the first SAT level, the agent provides the operator with the basic information about its current state and goals, intentions, and plans. At the second level, the agent reveals its reasoning process as well as the constraints/affordances that the agent considers when planning its actions. At the third SAT level, the agent provides the operator with information regarding its projection of future states, predicted consequences, likelihood of success/failure, and any uncertainty associated with the aforementioned projections.
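For readers who find code easier to parse than prose, here’s a minimal sketch of how the three SAT levels described above might be represented as a data structure. The field names are my own invention for illustration; they are not taken from ARL’s work.

```python
from dataclasses import dataclass
from typing import List, Optional

# A toy rendering of the three Situation awareness-based Agent Transparency
# (SAT) levels. The field names are invented for illustration only.

@dataclass
class SATLevel1:
    """What the agent is doing now: current state, goals, intentions, plans."""
    current_state: str
    goals: List[str]
    planned_actions: List[str]

@dataclass
class SATLevel2:
    """Why: the reasoning behind the plan and the constraints/affordances considered."""
    reasoning: str
    constraints: List[str]
    affordances: List[str]

@dataclass
class SATLevel3:
    """What next: projected outcomes, likelihood of success, and uncertainty."""
    projected_outcome: str
    probability_of_success: float          # 0.0 to 1.0
    uncertainty_notes: Optional[str] = None

@dataclass
class AgentTransparencyReport:
    level1: SATLevel1
    level2: SATLevel2
    level3: Optional[SATLevel3] = None     # higher transparency adds level 3

# Example: what an unmanned vehicle agent might surface to its operator.
report = AgentTransparencyReport(
    level1=SATLevel1("en route", ["reach waypoint B"], ["follow road", "avoid zone X"]),
    level2=SATLevel2("shortest safe route", ["fuel limit"], ["paved road available"]),
    level3=SATLevel3("arrive in 12 minutes", probability_of_success=0.85,
                     uncertainty_notes="traffic data is 10 minutes old"),
)
print(report.level3.probability_of_success)   # 0.85
```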

In one of the ARPI projects, IMPACT, a research program on human-agent teaming for management of multiple heterogeneous unmanned vehicles, ARL’s experimental effort focused on examining the effects of levels of agent transparency, based on the SAT model, on human operators’ decision making during military scenarios. The results of a series of human factors experiments collectively suggest that transparency on the part of the agent benefits the human’s decision making and thus the overall human-agent team performance. More specifically, researchers said the human’s trust in the agent was significantly better calibrated — accepting the agent’s plan when it is correct and rejecting it when it is incorrect — when the agent had a higher level of transparency.

The other project related to agent transparency that Chen and her colleagues performed under the ARPI was Autonomous Squad Member, on which ARL collaborated with Naval Research Laboratory scientists. The ASM is a small ground robot that interacts with and communicates with an infantry squad. As part of the overall ASM program, Chen’s group developed transparency visualization concepts, which they used to investigate the effects of agent transparency levels on operator performance. Informed by the SAT model, the ASM’s user interface features an at-a-glance transparency module where user-tested iconographic representations of the agent’s plans, motivator, and projected outcomes are used to promote transparent interaction with the agent. A series of human factors studies on the ASM’s user interface have investigated the effects of agent transparency on the human teammate’s situation awareness, trust in the ASM, and workload. The results, consistent with the IMPACT project’s findings, demonstrated the positive effects of agent transparency on the human’s task performance without an increase in perceived workload. The research participants also reported that they felt the ASM was more trustworthy, intelligent, and human-like when it conveyed greater levels of transparency.

Chen and her colleagues are currently expanding the SAT model into bidirectional transparency between the human and the agent.

“Bidirectional transparency, although conceptually straightforward–human and agent being mutually transparent about their reasoning process–can be quite challenging to implement in real time. However, transparency on the part of the human should support the agent’s planning and performance–just as agent transparency can support the human’s situation awareness and task performance, which we have demonstrated in our studies,” Chen hypothesized.

The challenge is to design user interfaces (which can include visual, auditory, and other modalities) that can support bidirectional transparency dynamically, in real time, without overwhelming the human with too much information and burden.

Interesting, yes? Here’s a link and a citation for the paper,

Situation Awareness-based Agent Transparency and Human-Autonomy Teaming Effectiveness by Jessie Y.C. Chen, Shan G. Lakhmani, Kimberly Stowers, Anthony R. Selkowitz, Julia L. Wright, and Michael Barnes. Theoretical Issues in Ergonomics Science May 2018. DOI 10.1080/1463922X.2017.1315750

This paper is behind a paywall.

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’ covering both an art/science event in Toronto and a Canadian government initiative, without mentioning the new technology needed to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial (Québec and Ontario) governments are prepared to invest in one of the necessary ‘new’ technologies, 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports on Canada’s 5G plans in suitably breathless (even in text only) tones of excitement in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part because it could help clear some of the hurdles we are already seeing with current networks.

Virtual Reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make their devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal-looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics  under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …
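Before moving on, the quoted speed figures are easy to sanity-check with a little arithmetic. This back-of-envelope calculation is mine, not The Economist’s, and it assumes a high-resolution movie is a file of roughly 5 GB.

```python
# Back-of-envelope check on the quoted 5G figures (my arithmetic, not The Economist's).
peak_5g_bits_per_second = 20e9                     # "at least 20 gigabits per second"
bytes_per_second = peak_5g_bits_per_second / 8     # 8 bits per byte -> 2.5 GB/s

movie_size_gigabytes = 5.0                         # assumed size of a high-resolution movie
transfer_seconds = movie_size_gigabytes * 1e9 / bytes_per_second

print(f"At 20 Gbps: {bytes_per_second / 1e9:.1f} GB/s, "
      f"so a {movie_size_gigabytes:.0f} GB movie takes about {transfer_seconds:.0f} seconds")
# At 20 Gbps: 2.5 GB/s, so a 5 GB movie takes about 2 seconds
```

In other words, the ‘movie in two seconds’ claim works out for a file of about 5 GB, which is plausible for a high-definition feature.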

That may not be the only brake. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings for these long promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview with Frank Koppens, the leader of the quantum nano-optoelectronic group at Institute of Photonic Sciences (ICFO) just outside of Barcelona, last year.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two Canadian provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power and the like, it seems the South Koreans have found answers of some kind, but it’s hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the ‘comments’. Thank you.

Quantum computing and more at SXSW (South by Southwest) 2018

It’s that time of year again. The entertainment conference known as South by Southwest (SXSW) is being held from March 9-18, 2018. The science portion of the conference can be found in the Intelligent Future sessions; from the description,

AI and new technologies embody the realm of possibilities where intelligence empowers and enables technology while sparking legitimate concerns about its uses. Highlighted Intelligent Future sessions include New Mobility and the Future of Our Cities, Mental Work: Moving Beyond Our Carbon Based Minds, Can We Create Consciousness in a Machine?, and more.

Intelligent Future Track sessions are held March 9-15 at the Fairmont.

Last year I focused on the conference sessions on robots, Hiroshi Ishiguro’s work, and artificial intelligence in a March 27, 2017 posting. This year I’m featuring one of the conference’s quantum computing sessions, from a March 9, 2018 University of Texas at Austin news release (also on EurekAlert),

Imagine a new kind of computer that can quickly solve problems that would stump even the world’s most powerful supercomputers. Quantum computers are fundamentally different. They can store information as not just ones and zeros, but in all the shades of gray in between. Several companies and government agencies are investing billions of dollars in the field of quantum information. But what will quantum computers be used for?
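To make those ‘shades of gray’ a little more concrete, here is a minimal numerical sketch of my own (not the university’s): a single qubit written as a superposition of 0 and 1, where the amplitudes set the odds of reading out either value.

```python
import numpy as np

# A single qubit state |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
# The "shades of gray" are the complex amplitudes a and b; a measurement
# still returns a plain 0 or 1, but with probabilities |a|^2 and |b|^2.
a = np.sqrt(0.7)
b = np.sqrt(0.3) * np.exp(1j * np.pi / 4)   # complex phase, invisible to a single readout
state = np.array([a, b])

probabilities = np.abs(state) ** 2
print("P(measure 0) =", round(probabilities[0], 3))   # 0.7
print("P(measure 1) =", round(probabilities[1], 3))   # 0.3

# Simulate a few readouts: the qubit collapses to 0 or 1 each time.
rng = np.random.default_rng(seed=1)
print("Ten simulated readouts:", rng.choice([0, 1], size=10, p=probabilities).tolist())
```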

South by Southwest 2018 hosts a panel on March 10th [2018] called Quantum Computing: Science Fiction to Science Fact. Experts on quantum computing make up the panel, including Jerry Chow of IBM; Bo Ewald of D-Wave Systems; Andrew Fursman of 1QBit; and Antia Lamas-Linares of the Texas Advanced Computing Center at UT Austin.

Antia Lamas-Linares is a Research Associate in the High Performance Computing group at TACC. Her background is as an experimentalist with quantum computing systems, including work done with them at the Centre for Quantum Technologies in Singapore. She joins podcast host Jorge Salazar to talk about her South by Southwest panel and about some of her latest research on quantum information.

Lamas-Linares co-authored a study (doi: 10.1117/12.2290561) in the Proceedings of the SPIE, The International Society for Optical Engineering, that was published in February 2018. The study, “Secure Quantum Clock Synchronization,” proposed a protocol to verify and secure time synchronization of distant atomic clocks, such as those used for GPS signals in cell phone towers and other places. “It’s important work,” explained Lamas-Linares, “because people are worried about malicious parties messing with the channels of GPS. What James Troupe (Applied Research Laboratories, UT Austin) and I looked at was whether we can use techniques from quantum cryptography and quantum information to make something that is inherently unspoofable.”

Antia Lamas-Linares: The most important thing is that quantum technologies is a really exciting field. And it’s exciting in a fundamental sense. We don’t quite know what we’re going to get out of it. We know a few things, and that’s good enough to drive research. But the things we don’t know are much broader than the things we know, and it’s going to be really interesting. Keep your eyes open for this.

Quantum Computing: Science Fiction to Science Fact, March 10, 2018 | 11:00AM – 12:00PM, Fairmont Manchester EFG, SXSW 2018, Austin, TX.

If you look up the session, you will find,

Quantum Computing: Science Fiction to Science Fact

Speakers

Bo Ewald

D-Wave Systems

Antia Lamas-Linares

Texas Advanced Computing Center at University of Texas

Startups and established players have sold 2000 Qubit systems, made freely available cloud access to quantum computer processors, and created large scale open source initiatives, all taking quantum computing from science fiction to science fact. Government labs and others like IBM, Microsoft, Google are developing software for quantum computers. What problems will be solved with this quantum leap in computing power that cannot be solved today with the world’s most powerful supercomputers?

[Programming descriptions are generated by participants and do not necessarily reflect the opinions of SXSW.]

Favorited by (1128)

Primary Entry: Platinum Badge, Interactive Badge

Secondary Entry: Music Badge, Film Badge

Format: Panel

Event Type: Session

Track: Intelligent Future

Level: Intermediate

I wonder what ‘level’ means? I was not able to find an answer (quickly).

It was a bit surprising to find someone from D-Wave Systems (a Vancouver-based quantum computing enterprise) at an entertainment conference. Still, it shouldn’t have been. Two other examples immediately come to mind: the TED (technology, entertainment, and design) conferences have been melding technology, if not science, with creative activities of all kinds for many years (TED 2018: The Age of Amazement, April 10-14, 2018 in Vancouver [Canada]) and Beakerhead (2018 dates: Sept. 19-23) has been melding art, science, and engineering in a festival held in Calgary (Canada) since 2013. One comment about TED: it was held for several years in California (1984, 1990 – 2013) and moved to Vancouver in 2014.

For anyone wanting to browse the 2018 SXSW Intelligent Future sessions online, go here. For anyone wanting to hear Antia Lamas-Linares talk about quantum computing, there’s the interview with Jorge Salazar (mentioned in the news release),

Machine learning, neural networks, and knitting

In a recent (Tuesday, March 6, 2018) live stream ‘conversation’ (‘Science in Canada; Investing in Canadian Innovation’, now published on YouTube) between Canadian Prime Minister, Justin Trudeau, and US science communicator, Bill Nye, at the University of Ottawa, they discussed, amongst many other topics, what AI (artificial intelligence) can and can’t do. They seemed to agree that AI can’t be creative, i.e., write poetry, create works of art, make jokes, etc., a conclusion which is both (in my opinion) true and not true.

There are times when I think the joke may be on us (humans). Take for example this March 6, 2018 story by Alexis Madrigal for The Atlantic magazine (Note: Links have been removed),

SkyKnit: How an AI Took Over an Adult Knitting Community

Ribald knitters teamed up with a neural-network creator to generate new types of tentacled, cozy shapes.

Janelle Shane is a humorist [Note: She describes herself as a “Research Scientist in optics. Plays with neural networks. …” in her Twitter bio.] who creates and mines her material from neural networks, the form of machine learning that has come to dominate the field of artificial intelligence over the last half-decade.

Perhaps you’ve seen the candy-heart slogans she generated for Valentine’s Day: DEAR ME, MY MY, LOVE BOT, CUTE KISS, MY BEAR, and LOVE BUN.

Or her new paint-color names: Parp Green, Shy Bather, Farty Red, and Bull Cream.

Or her neural-net-generated Halloween costumes: Punk Tree, Disco Monster, Spartan Gandalf, Starfleet Shark, and A Masked Box.

Her latest project, still ongoing, pushes the joke into a new, physical realm. Prodded by a knitter on the knitting forum Ravelry, Shane trained a type of neural network on a series of over 500 sets of knitting instructions. Then, she generated new instructions, which members of the Ravelry community have actually attempted to knit.

“The knitting project has been a particularly fun one so far just because it ended up being a dialogue between this computer program and these knitters that went over my head in a lot of ways,” Shane told me. “The computer would spit out a whole bunch of instructions that I couldn’t read and the knitters would say, this is the funniest thing I’ve ever read.”
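Shane’s actual tool was a recurrent neural network (a char-rnn style model) trained on those 500-odd patterns. As a rough illustration of the underlying idea, which is to learn the statistics of the training text and then sample new text from them, here is a much simpler character-level Markov chain of my own, trained on a tiny invented mini-corpus of knitting-style instructions.

```python
import random
from collections import defaultdict

# Shane trained a recurrent neural network on hundreds of real knitting
# patterns. This stand-in, a character-level Markov chain, shows the same
# basic idea in miniature: learn which characters tend to follow which,
# then sample new "instructions" from those statistics. The tiny corpus
# below is invented for illustration.
corpus = (
    "row 1: k2, p2, k2tog, yo, knit to end.\n"
    "row 2: purl all stitches.\n"
    "row 3: k1, yo, ssk, knit to last 2 sts, k2tog.\n"
    "row 4: purl all stitches.\n"
)

ORDER = 4  # how many characters of context to condition on
model = defaultdict(list)
for i in range(len(corpus) - ORDER):
    context = corpus[i:i + ORDER]
    model[context].append(corpus[i + ORDER])

def generate(seed="row ", length=120):
    out = seed
    for _ in range(length):
        choices = model.get(out[-ORDER:])
        if not choices:          # dead end: this context never appeared in training
            break
        out += random.choice(choices)
    return out

random.seed(7)
print(generate())
```

A real recurrent network captures far longer-range structure than this toy does, which is why SkyKnit’s output can look eerily like a plausible pattern while still making no knitting sense.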

It appears that the project evolved,

The human-machine collaboration created configurations of yarn that you probably wouldn’t give to your in-laws for Christmas, but they were interesting. The user citikas was the first to post a try at one of the earliest patterns, “reverss shawl.” It was strange, but it did have some charisma.

Shane nicknamed the whole effort “Project Hilarious Disaster.” The community called it SkyKnit.

I’m not sure what’s meant by “community” as mentioned in the previous excerpt. Are we talking about humans only, AI only, or both humans and AI?

Here’s some of what underlies SkyKnit (Note: Links have been removed),

The different networks all attempt to model the data they’ve been fed by tuning a vast, funky flowchart. After you’ve created a statistical model that describes your real data, you can also roll the dice and generate new, never-before-seen data of the same kind.

How this works—like, the math behind it—is very hard to visualize because values inside the model can have hundreds of dimensions and we are humble three-dimensional creatures moving through time. But as the neural-network enthusiast Robin Sloan puts it, “So what? It turns out imaginary spaces are useful even if you can’t, in fact, imagine them.”

Out of that ferment, a new kind of art has emerged. Its practitioners use neural networks not to attain practical results, but to see what’s lurking in these vast, opaque systems. What did the machines learn about the world as they attempted to understand the data they’d been fed? Famously, Google released DeepDream, which produced trippy visualizations that also demonstrated how that type of neural network processed the textures and objects in its source imagery.

Madrigal’s article is well worth reading if you have the time. You can also supplement Madrigal’s piece with an August 9, 2017 article about Janelle Shane’s algorithmic experiments by Jacob Brogan for slate.com.

I found some SkyKnit examples on Ravelry including this one from the Dollybird Workshop,

© Chatelaine

SkyKnit fancy addite rifopshent
by SkyKnit
Published in
Dollybird Workshop
SkyKnit
Craft
Knitting
Category
Stitch pattern
Published
February 2018
Suggested yarn
Yarn weight
Fingering (14 wpi)
Gauge
24 stitches and 30 rows = 4 inches
in stockinette stitch
Needle size
US 4 – 3.5 mm

written-pattern

This pattern is available as a free Ravelry download

SkyKnit is a type of machine learning algorithm called an artificial neural network. Its creator, Janelle Shane of AIweirdness.com, gave it 88,000 lines of knitting instructions from Stitch-Maps.com and Ravelry, and it taught itself how to make new patterns. Join the discussion!

SkyKnit seems to have created something that has parallel columns, and is reversible. Perhaps a scarf?

Test-knitting & image courtesy of Chatelaine

Patterns may include notes from testknitters; yarn, needles, and gauge are totally at your discretion.

About the designer
SkyKnit’s favorites include lace, tentacles, and totally not the elimination of the human race.
For more information, see: http://aiweirdness.com/

Shane’s website, aiweirdness.com, is where she posts musings such as this (from a March 2, [?] 2018 posting), Note: A link has been removed,

If you’ve been on the internet today, you’ve probably interacted with a neural network. They’re a type of machine learning algorithm that’s used for everything from language translation to finance modeling. One of their specialties is image recognition. Several companies – including Google, Microsoft, IBM, and Facebook – have their own algorithms for labeling photos. But image recognition algorithms can make really bizarre mistakes.

[image not reproduced: a photo with Microsoft Azure’s automatically generated caption and tags]

Microsoft Azure’s computer vision API [application programming interface] added the above caption and tags. But there are no sheep in the image above. None. I zoomed all the way in and inspected every speck.

….
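For anyone curious about what ‘an algorithm for labeling photos’ looks like from the user’s side, here is a minimal sketch using a pretrained network from the torchvision library. It is my own illustration, not Shane’s code and not Microsoft’s API; ‘photo.jpg’ and ‘imagenet_classes.txt’ (a plain list of the 1,000 ImageNet class names) are placeholder files you would supply yourself.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Minimal sketch of running a pretrained image classifier of the kind being
# discussed. "photo.jpg" and "imagenet_classes.txt" (one class name per line)
# are placeholder files.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(pretrained=True)   # downloads ImageNet weights
model.eval()

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # shape: (1, 3, 224, 224)

with torch.no_grad():
    logits = model(batch)
probabilities = torch.softmax(logits[0], dim=0)

with open("imagenet_classes.txt") as f:
    labels = [line.strip() for line in f]

top5 = torch.topk(probabilities, 5)
for prob, idx in zip(top5.values, top5.indices):
    print(f"{labels[idx.item()]}: {prob.item():.2%}")
# The labels are only statistical associations learned from training data,
# which is why a grassy, sheep-free field can still come back tagged "sheep."
```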

I have become quite interested in Shane’s self-descriptions, such as this one from the aiweirdness.com website,

Portrait/Logo

About

I train neural networks, a type of machine learning algorithm, to write unintentional humor as they struggle to imitate human datasets. Well, I intend the humor. The neural networks are just doing their best to understand what’s going on. Currently located on the occupied land of the Arapahoe Nation.
https://wandering.shop/@janellecshane

As for the joke being on us, I can’t help remembering the Facebook bots that developed their own language (Facebotlish) and were featured in my June 30, 2017 posting. There’s a certain eeriness to it all, which seems an appropriate response in a year celebrating the 200th anniversary of Mary Shelley’s 1818 book, Frankenstein; or, the Modern Prometheus. I’m closing with a video clip from the 1931 movie,

Happy Weekend!

New nanomapping technology: CRISPR-CAS9 as a programmable nanoparticle

A November 21, 2017 news item on Nanowerk describes a rather extraordinary (to me, anyway) approach to using CRISPR (clustered regularly interspaced short palindromic repeats)-Cas9 (Note: A link has been removed),

A team of scientists led by Virginia Commonwealth University physicist Jason Reed, Ph.D., have developed new nanomapping technology that could transform the way disease-causing genetic mutations are diagnosed and discovered. Described in a study published today [November 21, 2017] in the journal Nature Communications (“DNA nanomapping using CRISPR-Cas9 as a programmable nanoparticle”), this novel approach uses high-speed atomic force microscopy (AFM) combined with a CRISPR-based chemical barcoding technique to map DNA nearly as accurately as DNA sequencing while processing large sections of the genome at a much faster rate. What’s more–the technology can be powered by parts found in your run-of-the-mill DVD player.

A November 21, 2017 Virginia Commonwealth University news release by John Wallace, which originated the news item, provides more detail,

The human genome is made up of billions of DNA base pairs. Unraveled, it stretches to a length of nearly six feet long. When cells divide, they must make a copy of their DNA for the new cell. However, sometimes various sections of the DNA are copied incorrectly or pasted together at the wrong location, leading to genetic mutations that cause diseases such as cancer. DNA sequencing is so precise that it can analyze individual base pairs of DNA. But in order to analyze large sections of the genome to find genetic mutations, technicians must determine millions of tiny sequences and then piece them together with computer software. In contrast, biomedical imaging techniques such as fluorescence in situ hybridization, known as FISH, can only analyze DNA at a resolution of several hundred thousand base pairs.

Reed’s new high-speed AFM method can map DNA to a resolution of tens of base pairs while creating images up to a million base pairs in size. And it does it using a fraction of the amount of specimen required for DNA sequencing.

“DNA sequencing is a powerful tool, but it is still quite expensive and has several technological and functional limitations that make it difficult to map large areas of the genome efficiently and accurately,” said Reed, principal investigator on the study. Reed is a member of the Cancer Molecular Genetics research program at VCU Massey Cancer Center and an associate professor in the Department of Physics in the College of Humanities and Sciences.

“Our approach bridges the gap between DNA sequencing and other physical mapping techniques that lack resolution,” he said. “It can be used as a stand-alone method or it can complement DNA sequencing by reducing complexity and error when piecing together the small bits of genome analyzed during the sequencing process.”

IBM scientists made headlines in 1989 when they developed AFM technology and used a related technique to rearrange molecules at the atomic level to spell out “IBM.” AFM achieves this level of detail by using a microscopic stylus — similar to a needle on a record player — that barely makes contact with the surface of the material being studied. The interaction between the stylus and the molecules creates the image. However, traditional AFM is too slow for medical applications and so it is primarily used by engineers in materials science.

“Our device works in the same fashion as AFM but we move the sample past the stylus at a much greater velocity and use optical instruments to detect the interaction between the stylus and the molecules. We can achieve the same level of detail as traditional AFM but can process material more than a thousand times faster,” said Reed, whose team proved the technology can be mainstreamed by using optical equipment found in DVD players. “High-speed AFM is ideally suited for some medical applications as it can process materials quickly and provide hundreds of times more resolution than comparable imaging methods.”

Increasing the speed of AFM was just one hurdle Reed and his colleagues had to overcome. In order to actually identify genetic mutations in DNA, they had to develop a way to place markers or labels on the surface of the DNA molecules so they could recognize patterns and irregularities. An ingenious chemical barcoding solution was developed using a form of CRISPR technology.

CRISPR has made a lot of headlines recently in regard to gene editing. CRISPR is an enzyme that scientists have been able to “program” using targeting RNA in order to cut DNA at precise locations that the cell then repairs on its own. Reed’s team altered the chemical reaction conditions of the CRISPR enzyme so that it only sticks to the DNA and does not actually cut it.

“Because the CRISPR enzyme is a protein that’s physically bigger than the DNA molecule, it’s perfect for this barcoding application,” Reed said. “We were amazed to discover this method is nearly 90 percent efficient at bonding to the DNA molecules. And because it’s easy to see the CRISPR proteins, you can spot genetic mutations among the patterns in DNA.”
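The ‘programming’ comes down to base-pairing: the guide RNA matches a roughly 20-base stretch of DNA that must sit immediately next to a short ‘NGG’ motif (the PAM) required by the Cas9 protein. Here is a toy sketch of my own, not the VCU team’s code, that scans an invented DNA string for the spots where a given guide would direct a non-cutting Cas9 to bind.

```python
import re

# Toy illustration of how a CRISPR-Cas9 "address" works: the guide RNA
# directs the enzyme to a ~20-base target sequence that must sit immediately
# next to an NGG PAM motif. This is my own simplified sketch, not the VCU
# team's code, and the sequences are invented.
def find_cas9_sites(dna, guide):
    """Return positions where `guide` is followed by an NGG PAM in `dna`."""
    dna, guide = dna.upper(), guide.upper()
    pattern = re.escape(guide) + r"(?=[ACGT]GG)"   # lookahead for the PAM
    return [m.start() for m in re.finditer(pattern, dna)]

genome_fragment = "TTACGGATCCGTTAGCCAGTACGTTGAGGCCATGAATTCGGCTAGCTA"
guide_rna = "ATCCGTTAGCCAGTACGTTG"   # 20-base guide, invented for the example

print(find_cas9_sites(genome_fragment, guide_rna))   # -> [6]
```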

To demonstrate the technique’s effectiveness, the researchers mapped genetic translocations present in lymph node biopsies of lymphoma patients. Translocations occur when one section of the DNA gets copied and pasted to the wrong place in the genome. They are especially prevalent in blood cancers such as lymphoma but occur in other cancers as well.

While there are many potential uses for this technology, Reed and his team are focusing on medical applications. They are currently developing software based on existing algorithms that can analyze patterns in sections of DNA up to and over a million base pairs in size. Once completed, it would not be hard to imagine this shoebox-sized instrument in pathology labs assisting in the diagnosis and treatment of diseases linked to genetic mutations.

Here’s a link to and a citation for the paper,

DNA nanomapping using CRISPR-Cas9 as a programmable nanoparticle by Andrey Mikheikin, Anita Olsen, Kevin Leslie, Freddie Russell-Pavier, Andrew Yacoot, Loren Picco, Oliver Payton, Amir Toor, Alden Chesney, James K. Gimzewski, Bud Mishra, & Jason Reed. Nature Communications 8, Article number: 1665 (2017) doi:10.1038/s41467-017-01891-9 Published online: 21 November 2017

This paper is open access.

Alberta adds a newish quantum nanotechnology research hub to Canada’s quantum computing research scene

One of the winners in Canada’s 2017 federal budget announcement of the Pan-Canadian Artificial Intelligence Strategy was Edmonton, Alberta. It’s a fact which sometimes goes unnoticed while Canadians marvel at the wonderfulness found in Toronto and Montréal where it seems new initiatives and monies are being announced on a weekly basis (I exaggerate) for their AI (artificial intelligence) efforts.

Alberta’s quantum nanotechnology hub (graduate programme)

Intriguingly, it seems that Edmonton has higher aims than (an almost unnoticed) leadership in AI. Physicists at the University of Alberta have announced hopes to be just as successful as their AI brethren in a Nov. 27, 2017 article by Juris Graney for the Edmonton Journal,

Physicists at the University of Alberta [U of A] are hoping to emulate the success of their artificial intelligence studying counterparts in establishing the city and the province as the nucleus of quantum nanotechnology research in Canada and North America.

Google’s artificial intelligence research division DeepMind announced in July [2017] it had chosen Edmonton as its first international AI research lab, based on a long-running partnership with the U of A’s 10-person AI lab.

Retaining the brightest minds in the AI and machine-learning fields while enticing a global tech leader to Alberta was heralded as a coup for the province and the university.

It is something U of A physics professor John Davis believes the university’s new graduate program, Quanta, can help achieve in the world of quantum nanotechnology.

The field of quantum mechanics had long been a realm of theoretical science based on the theory that atomic and subatomic material like photons or electrons behave both as particles and waves.

“When you get right down to it, everything has both behaviours (particle and wave) and we can pick and choose certain scenarios which one of those properties we want to use,” he said.

But, Davis said, physicists and scientists are “now at the point where we understand quantum physics and are developing quantum technology to take to the marketplace.”

“Quantum computing used to be realm of science fiction, but now we’ve figured it out, it’s now a matter of engineering,” he said.

Quantum computing labs are being bought by large tech companies such as Google, IBM and Microsoft because they realize they are only a few years away from having this power, he said.

Those making the groundbreaking developments may want to commercialize their finds and take the technology to market and that is where Quanta comes in.

East vs. West—Again?

Ivan Semeniuk in his article, Quantum Supremacy, ignores any quantum research effort not located in either Waterloo, Ontario or metro Vancouver, British Columbia to describe a struggle between the East and the West (a standard Canadian trope). From Semeniuk’s Oct. 17, 2017 quantum article [link follows the excerpts] for the Globe and Mail’s October 2017 issue of the Report on Business (ROB),

 Lazaridis [Mike], of course, has experienced lost advantage first-hand. As co-founder and former co-CEO of Research in Motion (RIM, now called Blackberry), he made the smartphone an indispensable feature of the modern world, only to watch rivals such as Apple and Samsung wrest away Blackberry’s dominance. Now, at 56, he is engaged in a high-stakes race that will determine who will lead the next technology revolution. In the rolling heartland of southwestern Ontario, he is laying the foundation for what he envisions as a new Silicon Valley—a commercial hub based on the promise of quantum technology.

Semeniuk skips over the story of how Blackberry lost its advantage. I came onto that story late in the game when Blackberry was already in serious trouble due to a failure to recognize that the field they helped to create was moving in a new direction. If memory serves, they were trying to keep their technology wholly proprietary, which meant that developers couldn’t easily create apps to extend the phone’s features. Blackberry also fought a legal battle in the US with a patent troll, draining company resources and energy in what proved to be a futile effort.

Since then Lazaridis has invested heavily in quantum research. He gave the University of Waterloo a serious chunk of money as they named their Quantum Nano Centre (QNC) after him and his wife, Ophelia (you can read all about it in my Sept. 25, 2012 posting about the then new centre). The best details for Lazaridis’ investments in Canada’s quantum technology are to be found on the Quantum Valley Investments, About QVI, History webpage,

History has repeatedly demonstrated the power of research in physics to transform society.  As a student of history and a believer in the power of physics, Mike Lazaridis set out in 2000 to make real his bold vision to establish the Region of Waterloo as a world leading centre for physics research.  That is, a place where the best researchers in the world would come to do cutting-edge research and to collaborate with each other and in so doing, achieve transformative discoveries that would lead to the commercialization of breakthrough technologies.

Establishing a World Class Centre in Quantum Research:

The first step in this regard was the establishment of the Perimeter Institute for Theoretical Physics.  Perimeter was established in 2000 as an independent theoretical physics research institute.  Mike started Perimeter with an initial pledge of $100 million (which at the time was approximately one third of his net worth).  Since that time, Mike and his family have donated a total of more than $170 million to the Perimeter Institute.  In addition to this unprecedented monetary support, Mike also devotes his time and influence to help lead and support the organization in everything from the raising of funds with government and private donors to helping to attract the top researchers from around the globe to it.  Mike’s efforts helped Perimeter achieve and grow its position as one of a handful of leading centres globally for theoretical research in fundamental physics.

Perimeter is located in a Governor General’s Award-winning building in Waterloo.  Success in recruiting and resulting space requirements led to an expansion of the Perimeter facility.  A uniquely designed addition, which has been described as space-ship-like, was opened in 2011 as the Stephen Hawking Centre in recognition of one of the most famous physicists alive today, who holds the position of Distinguished Visiting Research Chair at Perimeter and is a strong friend and supporter of the organization.

Recognizing the need for collaboration between theorists and experimentalists, in 2002, Mike applied his passion and his financial resources toward the establishment of The Institute for Quantum Computing at the University of Waterloo.  IQC was established as an experimental research institute focusing on quantum information.  Mike established IQC with an initial donation of $33.3 million.  Since that time, Mike and his family have donated a total of more than $120 million to the University of Waterloo for IQC and other related science initiatives.  As in the case of the Perimeter Institute, Mike devotes considerable time and influence to help lead and support IQC in fundraising and recruiting efforts.  Mike’s efforts have helped IQC become one of the top experimental physics research institutes in the world.

Mike and Doug Fregin have been close friends since grade 5.  They are also co-founders of BlackBerry (formerly Research In Motion Limited).  Doug shares Mike’s passion for physics and supported Mike’s efforts at the Perimeter Institute with an initial gift of $10 million.  Since that time Doug has donated a total of $30 million to Perimeter Institute.  Separately, Doug helped establish the Waterloo Institute for Nanotechnology at the University of Waterloo with total gifts of $29 million.  As suggested by its name, WIN is devoted to research in the area of nanotechnology.  It has established as an area of primary focus the intersection of nanotechnology and quantum physics.

With a donation of $50 million from Mike which was matched by both the Government of Canada and the province of Ontario as well as a donation of $10 million from Doug, the University of Waterloo built the Mike & Ophelia Lazaridis Quantum-Nano Centre, a state of the art laboratory located on the main campus of the University of Waterloo that rivals the best facilities in the world.  QNC was opened in September 2012 and houses researchers from both IQC and WIN.

Leading the Establishment of Commercialization Culture for Quantum Technologies in Canada:

For many years, theorists have been able to demonstrate the transformative powers of quantum mechanics on paper.  That said, converting these theories to experimentally demonstrable discoveries has, putting it mildly, been a challenge.  Many naysayers have suggested that achieving these discoveries was not possible and even the believers suggested that it could likely take decades to achieve these discoveries.  Recently, a buzz has been developing globally as experimentalists have been able to achieve demonstrable success with respect to Quantum Information based discoveries.  Local experimentalists are very much playing a leading role in this regard.  It is believed by many that breakthrough discoveries that will lead to commercialization opportunities may be achieved in the next few years and certainly within the next decade.

Recognizing the unique challenges for the commercialization of quantum technologies (including risk associated with uncertainty of success, complexity of the underlying science and high capital / equipment costs) Mike and Doug have chosen to once again lead by example.  The Quantum Valley Investment Fund will provide commercialization funding, expertise and support for researchers that develop breakthroughs in Quantum Information Science that can reasonably lead to new commercializable technologies and applications.  Their goal in establishing this Fund is to lead in the development of a commercialization infrastructure and culture for Quantum discoveries in Canada and thereby enable such discoveries to remain here.

Semeniuk goes on to set the stage for Waterloo/Lazaridis vs. Vancouver (from Semeniuk’s 2017 ROB article),

… as happened with Blackberry, the world is once again catching up. While Canada’s funding of quantum technology ranks among the top five in the world, the European Union, China, and the US are all accelerating their investments in the field. Tech giants such as Google [also known as Alphabet], Microsoft and IBM are ramping up programs to develop computers and other technologies based on quantum principles. Meanwhile, even as Lazaridis works to establish Waterloo as the country’s quantum hub, a Vancouver-area company has emerged to challenge that claim. The two camps—one methodically focused on the long game, the other keen to stake an early commercial lead—have sparked an East-West rivalry that many observers of the Canadian quantum scene are at a loss to explain.

Is it possible that some of the rivalry might be due to an influential individual who has invested heavily in a ‘quantum valley’ and has a history of trying to ‘own’ a technology?

Getting back to D-Wave Systems, the Vancouver company, I have written about them a number of times (particularly in 2015; for the full list: input D-Wave into the blog search engine). This June 26, 2015 posting includes a reference to an article in The Economist magazine about D-Wave’s commercial opportunities while the bulk of the posting is focused on a technical breakthrough.

Semeniuk offers an overview of the D-Wave Systems story,

D-Wave was born in 1999, the same year Lazaridis began to fund quantum science in Waterloo. From the start, D-Wave had a more immediate goal: to develop a new computer technology to bring to market. “We didn’t have money or facilities,” says Geordie Rose, a physics PhD who co-founded the company and served in various executive roles. …

The group soon concluded that the kind of machine most scientists were pursuing based on so-called gate-model architecture was decades away from being realized—if ever. …

Instead, D-Wave pursued another idea, based on a principle dubbed “quantum annealing.” This approach seemed more likely to produce a working system, even if the applications that would run on it were more limited. “The only thing we cared about was building the machine,” says Rose. “Nobody else was trying to solve the same problem.”
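For anyone wondering what a quantum annealer actually chews on: it doesn’t run programs in the usual sense, it hunts for the lowest-energy setting of a network of binary variables. The sketch below is my own toy example and does not use D-Wave’s software; it writes a tiny problem in that form, known as a QUBO (quadratic unconstrained binary optimization), and solves it by brute force, which is exactly the part that stops scaling on classical machines as the number of variables grows.

```python
import itertools
import numpy as np

# A tiny QUBO: find the binary vector x minimizing
#   E(x) = sum_i Q[i][i] * x[i]  +  sum_{i<j} Q[i][j] * x[i] * x[j]
# The numbers below are invented; they encode "pick items, but items 0 and 1
# clash, and picking too many costs you."
n = 4
Q = np.zeros((n, n))
np.fill_diagonal(Q, -1.0)                     # reward for picking each item
Q[0, 1] += 3.0                                # heavy penalty: items 0 and 1 clash
for i, j in itertools.combinations(range(n), 2):
    Q[i, j] += 0.8                            # mild penalty on every pair picked

def energy(x):
    x = np.array(x)
    return float(x @ np.triu(Q) @ x)          # diagonal plus upper-triangle terms

# Brute force over all 2^n settings -- the step that blows up classically.
best = min(itertools.product([0, 1], repeat=n), key=energy)
print("lowest-energy choice:", best, "energy:", round(energy(best), 2))
```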

D-Wave debuted its first prototype at an event in California in February 2007, running it through a few basic problems such as solving a Sudoku puzzle and finding the optimal seating plan for a wedding reception. … “They just assumed we were hucksters,” says Hilton [Jeremy Hilton, D-Wave senior vice-president of systems]. Federico Spedalieri, a computer scientist at the University of Southern California’s [USC] Information Sciences Institute who has worked with D-Wave’s system, says the limited information the company provided about the machine’s operation provoked outright hostility. “I think that played against them a lot in the following years,” he says.

It seems Lazaridis is not the only one who likes to hold company information tightly.

Back to Semeniuk and D-Wave,

Today [October 2017], the Los Alamos National Laboratory owns a D-Wave machine, which costs about $15-million. Others pay to access D-Wave systems remotely. This year, for example, Volkswagen fed data from thousands of Beijing taxis into a machine located in Burnaby [one of the municipalities that make up metro Vancouver] to study ways to optimize traffic flow.

But the application for which D-Wave has the highest hopes is artificial intelligence. Any AI program hinges on the “training” through which a computer acquires automated competence, and the 2000Q [a D-Wave computer] appears well suited to this task. …

Yet, for all the buzz D-Wave has generated, with several research teams outside Canada investigating its quantum annealing approach, the company has elicited little interest from the Waterloo hub. As a result, what might seem like a natural development—the Institute for Quantum Computing acquiring access to a D-Wave machine to explore and potentially improve its value—has not occurred. …

I am particularly interested in this comment as it concerns public funding (from Semeniuk’s article),

Vern Brownell, a former Goldman Sachs executive who became CEO of D-Wave in 2009, calls the lack of collaboration with Waterloo’s research community “ridiculous,” adding that his company’s efforts to establish closer ties have proven futile. “I’ll be blunt: I don’t think our relationship is good enough,” he says. Brownell also points out that, while hundreds of millions in public funds have flowed into Waterloo’s ecosystem, little funding is available for Canadian scientists wishing to make the most of D-Wave’s hardware—despite the fact that it remains unclear which core quantum technology will prove the most profitable.

There’s a lot more to Semeniuk’s article but this is the last excerpt,

The world isn’t waiting for Canada’s quantum rivals to forge a united front. Google, Microsoft, IBM, and Intel are racing to develop a gate-model quantum computer—the sector’s ultimate goal. (Google’s researchers have said they will unveil a significant development early next year.) With the U.K., Australia and Japan pouring money into quantum, Canada, an early leader, is under pressure to keep up. The federal government is currently developing  a strategy for supporting the country’s evolving quantum sector and, ultimately, getting a return on its approximately $1-billion investment over the past decade [emphasis mine].

I wonder where the “approximately $1-billion … ” figure came from. I ask because some years ago MP Peter Julian asked the government for information about how much Canadian federal money had been invested in nanotechnology. The government replied with sheets of paper (a pile approximately 2 inches high) that had funding disbursements from various ministries. Each ministry had its own method with different categories for listing disbursements and the titles for the research projects were not necessarily informative for anyone outside a narrow specialty. (Peter Julian’s assistant had kindly sent me a copy of the response they had received.) The bottom line is that it would have been close to impossible to determine the amount of federal funding devoted to nanotechnology using that data. So, where did the $1-billion figure come from?

In any event, it will be interesting to see how the Council of Canadian Academies assesses the ‘quantum’ situation in its more academically inclined report, “The State of Science and Technology and Industrial Research and Development in Canada,” when it’s released later this year (2018).

Finally, you can find Semeniuk’s October 2017 article here but be aware it’s behind a paywall.

Whither we goest?

Despite any doubts one might have about Lazaridis’ approach to research and technology, his tremendous investment and support cannot be denied. Without him, Canada’s quantum research efforts would be substantially less significant. As for the ‘cowboys’ in Vancouver, it takes a certain temperament to found a start-up company and it seems the D-Wave folks have more in common with Lazaridis than they might like to admit. As for the Quanta graduate  programme, it’s early days yet and no one should ever count out Alberta.

Meanwhile, one can continue to hope that a more thoughtful approach to regional collaboration will be adopted so Canada can continue to blaze trails in the field of quantum research.