
China’s neuromorphic chips: Darwin and Tianjic

I believe that China has more than two neuromorphic chips. The two being featured here are the ones for which I was easily able to find information.

The Darwin chip

The first information (that I stumbled across) about China and a neuromorphic chip (Darwin) was in a December 22, 2015 Science China Press news release on EurekAlert,

Artificial Neural Network (ANN) is a type of information processing system based on mimicking the principles of biological brains, and has been broadly applied in application domains such as pattern recognition, automatic control, signal processing, decision support systems and artificial intelligence. Spiking Neural Network (SNN) is a type of biologically-inspired ANN that performs information processing based on discrete-time spikes. It is more biologically realistic than classic ANNs, and can potentially achieve a much better performance-power ratio. Recently, researchers from Zhejiang University and Hangzhou Dianzi University in Hangzhou, China successfully developed the Darwin Neural Processing Unit (NPU), a neuromorphic hardware co-processor based on Spiking Neural Networks, fabricated by standard CMOS technology.
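The spiking computation described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron in Python. This is a generic textbook sketch, not the Darwin chip's actual neuron model, which the news release does not specify.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: a toy illustration of the
# discrete-time spiking described above, NOT the Darwin NPU's neuron model.

def simulate_lif(input_current, threshold=1.0, leak=0.9):
    """Return the list of time steps at which the neuron spikes."""
    v = 0.0        # membrane potential
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i      # leaky integration of the input
        if v >= threshold:    # fire when the threshold is crossed...
            spikes.append(t)
            v = 0.0           # ...then reset the potential
    return spikes

# A constant input charges the neuron until it fires, repeatedly.
print(simulate_lif([0.3] * 20))  # [3, 7, 11, 15, 19]
```

The neuron stays silent between spikes, which is the source of the low power consumption the researchers emphasize: computation only happens when a spike arrives.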

With the rapid development of the Internet of Things and intelligent hardware systems, a variety of intelligent devices are pervasive in today’s society, providing many services and conveniences to people’s lives, but they also raise the challenge of running complex intelligent algorithms on small devices. Sponsored by the College of Computer Science of Zhejiang University, the research group led by Dr. De Ma from Hangzhou Dianzi University and Dr. Xiaolei Zhu from Zhejiang University has developed a co-processor named Darwin. The Darwin NPU aims to provide hardware acceleration of intelligent algorithms, with a target application domain of resource-constrained, low-power small embedded devices. It has been fabricated by a 180nm standard CMOS process, supporting a maximum of 2048 neurons, more than 4 million synapses and 15 different possible synaptic delays. It is highly configurable, supporting reconfiguration of SNN topology and many parameters of neurons and synapses. Figure 1 shows photos of the die and the prototype development board, which supports input/output in the form of neural spike trains via USB port.

The successful development of Darwin demonstrates the feasibility of real-time execution of Spiking Neural Networks in resource-constrained embedded systems. It supports flexible configuration of a multitude of parameters of the neural network, hence it can be used to implement different functionalities as configured by the user. Its potential applications include intelligent hardware systems, robotics, brain-computer interfaces, and others. Since it uses spikes for information processing and transmission, similar to biological neural networks, it may be suitable for analysis and processing of biological spiking neural signals, and building brain-computer interface systems by interfacing with animal or human brains. As a prototype application in Brain-Computer Interfaces, Figure 2 [not included here] describes an application example of recognizing the user’s motor imagery intention via real-time decoding of EEG signals, i.e., whether he is thinking of left or right, and using it to control the movement direction of a basketball in the virtual environment. Different from conventional EEG signal analysis algorithms, the input and output to Darwin are both neural spikes: the input is spike trains that encode EEG signals; after processing by the neural network, the output neuron with the highest firing rate is chosen as the classification result.
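The readout described at the end of the excerpt, picking the output neuron with the highest firing rate as the classification result, can be sketched like this. The spike trains and neuron names here are invented for illustration; the release doesn't publish Darwin's actual decoder.

```python
# Rate-based readout as described in the excerpt: the output neuron that
# fires most often wins. Spike trains are invented for illustration.

def classify_by_firing_rate(output_spike_trains, labels):
    """output_spike_trains: {neuron_name: list of spike times}.
    Returns (winning label, spike counts per neuron)."""
    rates = {name: len(spikes) for name, spikes in output_spike_trains.items()}
    winner = max(rates, key=rates.get)   # neuron with the highest firing rate
    return labels[winner], rates

# Two output neurons for the left/right motor-imagery task in the excerpt.
trains = {"neuron_L": [2, 5, 9, 14, 18], "neuron_R": [7, 16]}
labels = {"neuron_L": "left", "neuron_R": "right"}
decision, rates = classify_by_firing_rate(trains, labels)
print(decision)  # left (5 spikes beat 2)
```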

The most recent development for this chip was announced in a September 2, 2019 Zhejiang University press release (Note: Links have been removed),

The second generation of the Darwin Neural Processing Unit (Darwin NPU 2), as well as its corresponding toolchain and micro-operating system, was released in Hangzhou recently. This research was led by Zhejiang University, with Hangzhou Dianzi University and Huawei Central Research Institute participating in the development of the chip and its algorithms. The Darwin NPU 2 can be primarily applied to the smart Internet of Things (IoT). It can support up to 150,000 neurons, the largest scale achieved nationwide.

The Darwin NPU 2 is fabricated by standard 55nm CMOS technology. Every “neuromorphic” chip is made up of 576 kernels, each of which can support 256 neurons. It contains over 10 million synapses which can construct a powerful brain-inspired computing system.
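As a quick sanity check, the kernel and per-kernel figures quoted here multiply out to the "up to 150,000 neurons" claim, and to roughly the "two orders of magnitude" increase over the first-generation chip's 2048 neurons mentioned later in the release:

```python
# Cross-checking the Darwin NPU 2 figures quoted in the press release.
kernels = 576
neurons_per_kernel = 256
total_neurons = kernels * neurons_per_kernel
print(total_neurons)         # 147456, i.e. "up to 150,000 neurons"
print(total_neurons / 2048)  # 72.0: ~two orders of magnitude over the 2015 chip
```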

“A brain-inspired chip can work like the neurons inside a human brain and it is remarkably unique in image recognition, visual and audio comprehension and naturalistic language processing,” said MA De, an associate professor at the College of Computer Science and Technology on the research team.

“In comparison with traditional chips, brain-inspired chips are more adept at processing ambiguous data, say, perception tasks. Another prominent advantage is their low energy consumption. In the process of information transmission, only those neurons that receive and process spikes will be activated while other neurons will stay dormant. In this case, energy consumption can be extremely low,” said Dr. ZHU Xiaolei at the School of Microelectronics.

To cater to the demands for voice business, Huawei Central Research Institute designed an efficient spiking neural network algorithm in accordance with the defining feature of the Darwin NPU 2 architecture, thereby increasing computing speeds and improving recognition accuracy tremendously.

Scientists have developed a host of applications, including gesture recognition, image recognition, voice recognition and decoding of electroencephalogram (EEG) signals, on the Darwin NPU 2 and reduced energy consumption by at least two orders of magnitude.

In comparison with the first generation of the Darwin NPU, which was developed in 2015, the Darwin NPU 2 has increased the number of neurons by two orders of magnitude, from 2048 neurons, and augmented the flexibility and plasticity of the chip configuration, thus expanding the potential for applications appreciably. The improvement in the brain-inspired chip will bring in its wake a revolution in computer technology and artificial intelligence. At present, the brain-inspired chip adopts a relatively simplified neuron model, but neurons in a real brain are far more sophisticated and many biological mechanisms have yet to be explored by neuroscientists and biologists. It is expected that in the not-too-distant future, a fascinating improvement on the Darwin NPU 2 will come over the horizon.

I haven’t been able to find a recent (i.e., post-2017) research paper featuring Darwin, but there is another chip, and research on that one was published in July 2019. First, the news.

The Tianjic chip

A July 31, 2019 article in the New York Times by Cade Metz describes the research and offers what seems to be a jaundiced perspective about the field of neuromorphic computing (Note: A link has been removed),

As corporate giants like Ford, G.M. and Waymo struggle to get their self-driving cars on the road, a team of researchers in China is rethinking autonomous transportation using a souped-up bicycle.

This bike can roll over a bump on its own, staying perfectly upright. When the man walking just behind it says “left,” it turns left, angling back in the direction it came.

It also has eyes: It can follow someone jogging several yards ahead, turning each time the person turns. And if it encounters an obstacle, it can swerve to the side, keeping its balance and continuing its pursuit.

… Chinese researchers who built the bike believe it demonstrates the future of computer hardware. It navigates the world with help from what is called a neuromorphic chip, modeled after the human brain.

Here’s a video, released by the researchers, demonstrating the chip’s abilities,

Now back to Metz’s July 31, 2019 article (Note: A link has been removed),

The short video did not show the limitations of the bicycle (which presumably tips over occasionally), and even the researchers who built the bike admitted in an email to The Times that the skills on display could be duplicated with existing computer hardware. But in handling all these skills with a neuromorphic processor, the project highlighted the wider effort to achieve new levels of artificial intelligence with novel kinds of chips.

This effort spans myriad start-up companies and academic labs, as well as big-name tech companies like Google, Intel and IBM. And as the Nature paper demonstrates, the movement is gaining significant momentum in China, a country with little experience designing its own computer processors, but which has invested heavily in the idea of an “A.I. chip.”

If you can get past what seems to be a patronizing attitude, there are some good explanations and cogent criticisms in the piece (Metz’s July 31, 2019 article, Note: Links have been removed),

… it faces significant limitations.

A neural network doesn’t really learn on the fly. Engineers train a neural network for a particular task before sending it out into the real world, and it can’t learn without enormous numbers of examples. OpenAI, a San Francisco artificial intelligence lab, recently built a system that could beat the world’s best players at a complex video game called Dota 2. But the system first spent months playing the game against itself, burning through millions of dollars in computing power.

Researchers aim to build systems that can learn skills in a manner similar to the way people do. And that could require new kinds of computer hardware. Dozens of companies and academic labs are now developing chips specifically for training and operating A.I. systems. The most ambitious projects are the neuromorphic processors, including the Tianjic chip under development at Tsinghua University in China.

Such chips are designed to imitate the network of neurons in the brain, not unlike a neural network but with even greater fidelity, at least in theory.

Neuromorphic chips typically include hundreds of thousands of faux neurons, and rather than just processing 1s and 0s, these neurons operate by trading tiny bursts of electrical signals, “firing” or “spiking” only when input signals reach critical thresholds, as biological neurons do.

Tiernan Ray’s August 3, 2019 article about the chip for ZDNet.com offers some thoughtful criticism with a side dish of snark (Note: Links have been removed),

Nature magazine’s cover story [July 31, 2019] is about a Chinese chip [Tianjic chip] that can run traditional deep learning code and also perform “neuromorphic” operations in the same circuitry. The work’s value seems obscured by a lot of hype about “artificial general intelligence” that has no real justification.

The term “artificial general intelligence,” or AGI, doesn’t actually refer to anything at this point; it is merely a placeholder, a kind of Rorschach Test for people to fill the void with whatever notions they have of what it would mean for a machine to “think” like a person.

Despite that fact, or perhaps because of it, AGI is an ideal marketing term to attach to a lot of efforts in machine learning. Case in point, a research paper featured on the cover of this week’s Nature magazine about a new kind of computer chip developed by researchers at China’s Tsinghua University that could “accelerate the development of AGI,” they claim.

The chip is a strange hybrid of approaches, and is intriguing, but the work leaves unanswered many questions about how it’s made, and how it achieves what researchers claim of it. And some longtime chip observers doubt the impact will be as great as suggested.

“This paper is an example of the good work that China is doing in AI,” says Linley Gwennap, longtime chip-industry observer and principal analyst with chip analysis firm The Linley Group. “But this particular idea isn’t going to take over the world.”

The premise of the paper, “Towards artificial general intelligence with hybrid Tianjic chip architecture,” is that to achieve AGI, computer chips need to change. That’s an idea supported by fervent activity these days in the land of computer chips, with lots of new chip designs being proposed specifically for machine learning.

The Tsinghua authors specifically propose that the mainstream machine learning of today needs to be merged in the same chip with what’s called “neuromorphic computing.” Neuromorphic computing, first conceived by Caltech professor Carver Mead in the early ’80s, has been an obsession for firms including IBM for years, with little practical result.

[Missing details about the chip] … For example, the part is said to have “reconfigurable” circuits, but how the circuits are to be reconfigured is never specified. It could be so-called “field programmable gate array,” or FPGA, technology or something else. Code for the project is not provided by the authors as it often is for such research; the authors offer to provide the code “on reasonable request.”

More important is the fact the chip may have a hard time stacking up to a lot of competing chips out there, says analyst Gwennap. …

“What the paper calls ANN and SNN are two very different means of solving similar problems, kind of like rotating (helicopter) and fixed wing (airplane) are for aviation,” says Gwennap. “Ultimately, I expect ANN [?] and SNN [spiking neural network] to serve different end applications, but I don’t see a need to combine them in a single chip; you just end up with a chip that is OK for two things but not great for anything.”

But you also end up generating a lot of buzz, and given the tension between the U.S. and China over all things tech, and especially A.I., the notion China is stealing a march on the U.S. in artificial general intelligence — whatever that may be — is a summer sizzler of a headline.

ANN could be either artificial neural network or something mentioned earlier in Ray’s article, a shortened version of CANN [continuous attractor neural network].

Shelly Fan’s August 7, 2019 article for the SingularityHub is almost as enthusiastic about the work as the podcasters for Nature magazine were (a little more about that later),

The study shows that China is readily nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in developing new AI chip designs—hell, with billions in government investment it may have already had a head start. A sweeping AI plan from 2017 looks to catch up with the US on AI technology and application by 2020. By 2030, China’s aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility, in that it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.

Compared to IBM TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times, while using just a sliver (1/10,000) of energy.

BTW, Fan is a neuroscientist (from her SingularityHub profile page),

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF [University of California at San Francisco] to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, “Will AI Replace Us?” (Thames & Hudson) will be out April 2019.

Onto Nature. Here’s a link to and a citation for the paper,

Towards artificial general intelligence with hybrid Tianjic chip architecture by Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, Feng Chen, Ning Deng, Si Wu, Yu Wang, Yujie Wu, Zheyu Yang, Cheng Ma, Guoqi Li, Wentao Han, Huanglong Li, Huaqiang Wu, Rong Zhao, Yuan Xie & Luping Shi. Nature volume 572, pages 106–111 (2019) DOI: https://doi.org/10.1038/s41586-019-1424-8 Published: 31 July 2019 Issue Date: 01 August 2019

This paper is behind a paywall.

The July 31, 2019 Nature podcast includes a segment about the Tianjic chip research from China, starting at the 9 min. 13 sec. mark (AI hardware); alternatively, you can scroll down about 55% of the way to the transcript of the interview with Luke Fleet, the Nature editor who dealt with the paper.

Some thoughts

The pundits put me in mind of my own reaction when I heard about phones that could take pictures. I didn’t see the point but, as it turned out, there was a perfectly good reason for combining what had been two separate activities into one device. It was no longer just a telephone and I had completely missed the point.

This too may be the case with the Tianjic chip. I think it’s too early to say whether or not it represents a new type of chip or if it’s a dead end.

Brain-inspired electronics with organic memristors for wearable computing

I went down a rabbit hole while trying to figure out the difference between ‘organic’ memristors and standard memristors. I have put the results of my investigation at the end of this post. First, there’s the news.

An April 21, 2020 news item on ScienceDaily explains why researchers are so focused on memristors and brainlike computing,

The advent of artificial intelligence, machine learning and the internet of things is expected to change modern electronics and bring forth the fourth Industrial Revolution. The pressing question for many researchers is how to handle this technological revolution.

“It is important for us to understand that the computing platforms of today will not be able to sustain at-scale implementations of AI algorithms on massive datasets,” said Thirumalai Venkatesan, one of the authors of a paper published in Applied Physics Reviews, from AIP Publishing.

“Today’s computing is way too energy-intensive to handle big data. We need to rethink our approaches to computation on all levels: materials, devices and architecture that can enable ultralow energy computing.”

An April 21, 2020 American Institute of Physics (AIP) news release (also on EurekAlert), which originated the news item, describes the authors’ approach to the problems with organic memristors,

Brain-inspired electronics with organic memristors could offer a functionally promising and cost-effective platform, according to Venkatesan. Memristive devices are electronic devices with an inherent memory that are capable of both storing data and performing computation. Since memristors are functionally analogous to the operation of neurons, the computing units in the brain, they are optimal candidates for brain-inspired computing platforms.

Until now, oxides have been the leading candidate as the optimum material for memristors. Different material systems have been proposed but none have been successful so far.

“Over the last 20 years, there have been several attempts to come up with organic memristors, but none of those have shown any promise,” said Sreetosh Goswami, lead author on the paper. “The primary reason behind this failure is their lack of stability and reproducibility, and ambiguity in mechanistic understanding. At a device level, we are now able to solve most of these problems.”

This new generation of organic memristors is developed based on metal azo complex devices, which are the brainchild of Sreebrata Goswami, a professor at the Indian Association for the Cultivation of Science in Kolkata and another author on the paper.

“In thin films, the molecules are so robust and stable that these devices can eventually be the right choice for many wearable and implantable technologies or a body net, because these could be bendable and stretchable,” said Sreebata Goswami. A body net is a series of wireless sensors that stick to the skin and track health.

The next challenge will be to produce these organic memristors at scale, said Venkatesan.

“Now we are making individual devices in the laboratory. We need to make circuits for large-scale functional implementation of these devices.”

Caption: The device structure at a molecular level. The gold nanoparticles on the bottom electrode enhance the field, enabling an ultra-low energy operation of the molecular device. Credit: Sreetosh Goswami, Sreebrata Goswami and Thirumalai Venky Venkatesan

Here’s a link to and a citation for the paper,

An organic approach to low energy memory and brain inspired electronics by Sreetosh Goswami, Sreebrata Goswami, and T. Venkatesan. Applied Physics Reviews 7, 021303 (2020) DOI: https://doi.org/10.1063/1.5124155

This paper is open access.

Basics about memristors and organic memristors

This undated article on Nanowerk provides a relatively complete and technical description of memristors in general (Note: A link has been removed),

A memristor (a portmanteau of memory and resistor) is a non-volatile electronic memory device that was first theorized by Leon Ong Chua in 1971 as the fourth fundamental two-terminal circuit element following the resistor, the capacitor, and the inductor (IEEE Transactions on Circuit Theory, “Memristor-The missing circuit element”).

Its special property is that its resistance can be programmed (resistor function) and subsequently remains stored (memory function). Unlike other memories that exist today in modern electronics, memristors are stable and remember their state even if the device loses power.

However, it was only almost 40 years later that the first practical device was fabricated. This was in 2008, when a group led by Stanley Williams at HP Research Labs realized that switching of the resistance between a conducting and less conducting state in metal-oxide thin-film devices was showing Leon Chua’s memristor behavior. …
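The behaviour described here, a resistance that can be programmed by an applied voltage and that persists when the power is removed, can be sketched with the linear ion-drift model commonly used as an idealization of the 2008 HP device. The parameter values below are arbitrary illustrative choices, not HP's measured figures.

```python
# Linear ion-drift memristor model (a textbook idealization of the 2008 HP
# metal-oxide device). Parameter values are arbitrary illustrative choices.

def simulate_memristor(voltages, r_on=100.0, r_off=16000.0, w0=0.5, k=1e3):
    """Drive the device with a voltage sequence; return resistance per step.

    w is a normalized state variable (0..1): the fraction of the film in its
    low-resistance phase. It drifts with the applied current and, being a
    state variable, persists when the drive is removed -- the 'memory'."""
    w = w0
    history = []
    for v in voltages:
        r = w * r_on + (1 - w) * r_off      # resistance between the two limits
        history.append(r)
        i = v / r                           # Ohm's law for the current
        w = min(1.0, max(0.0, w + k * i))   # state drifts with charge, clipped

    return history

rs = simulate_memristor([1.0] * 5 + [0.0] * 3)  # drive, then remove power
print(rs[4] < rs[0])   # True: positive bias lowered the resistance
print(rs[-1] == rs[5]) # True: the state (and resistance) holds with no drive
```

The second print is the non-volatility Chua predicted: with zero applied voltage, no charge flows, so the state variable, and hence the resistance, stays put.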

The article on Nanowerk includes an embedded video presentation on memristors given by Stanley Williams (also known as R. Stanley Williams).

Mention of an ‘organic’ memristor can be found in an October 31, 2017 article by Ryan Whitwam,

The memristor is composed of the transition metal ruthenium complexed with “azo-aromatic ligands.” [emphasis mine] The theoretical work enabling this material was performed at Yale, and the organic molecules were synthesized at the Indian Association for the Cultivation of Sciences. …

I highlighted ‘ligands’ because that appears to be the difference. However, there is more than one type of ligand on Wikipedia.

First, there’s the Ligand (biochemistry) entry (Note: Links have been removed),

In biochemistry and pharmacology, a ligand is a substance that forms a complex with a biomolecule to serve a biological purpose. …

Then, there’s the Ligand entry,

In coordination chemistry, a ligand[help 1] is an ion or molecule (functional group) that binds to a central metal atom to form a coordination complex …

Finally, there’s the Ligand (disambiguation) entry (Note: Links have been removed),

  • Ligand, an atom, ion, or functional group that donates one or more of its electrons through a coordinate covalent bond to one or more central atoms or ions
  • Ligand (biochemistry), a substance that binds to a protein
  • a ‘guest’ in host–guest chemistry

I did take a look at the paper and did not see any references to proteins or other biomolecules that I could recognize as such. I’m not sure why the researchers are describing their device as an ‘organic’ memristor but this may reflect a shortcoming in the definitions I have found or shortcomings in my reading of the paper rather than an error on their parts.

Hopefully, more research will be forthcoming and it will be possible to better understand the terminology.

Fourth Industrial Revolution and its impact on charity organizations

Andy Levy-Ajzenkopf’s February 21, 2020 article (Technology and innovation: How the Fourth Industrial Revolution is impacting the charitable sector) for Charity Village has an ebullient approach to adoption of new and emerging technologies in the charitable sector (Note: A link has been removed),

Almost daily, new technologies are being developed to help innovate the way people give or the way organizations offer opportunities to advance their causes. There is no going back.

The charitable sector – along with society at large – is now fully in the midst of what is being called the Fourth Industrial Revolution, a term first brought to prominence among CEOs, thought leaders and policy makers at the 2016 World Economic Forum. And if you haven’t heard the phrase yet, get ready to hear it tons more as economies around the world embrace it.

To be clear, the Fourth Industrial Revolution is the newest disruption in the way our world works. When you hear someone talk about it, what they’re describing is the massive technological shift in our business and personal ecosystems that now rely heavily on things like artificial intelligence, quantum computing, 3D printing and the general “Internet of things.”

Still, now more than ever, charitable business is getting done and being advanced by sector pioneers who aren’t afraid to make use of new technologies on offer to help civil society.

It seems like everywhere one turns, the topic of artificial intelligence (A.I.) is increasingly becoming the subject of choice.

This is no different in the charitable sector, and particularly so for a new company called Fundraise Wisely (aka Wisely). Its co-founder and CEO, Artiom Komarov, explains a bit about what exactly his tech is doing for the sector.

“We help accelerate fundraising with A.I. At a product level, we connect to your CRM (customer relationship management system) and predict the next gift and next gift date for every donor. We then use that information to help you populate and prioritize donor portfolios,” Komarov states.

He notes that his company is seeing increased demand for innovative technologies from charities over the last while.

“What we’re hearing is that… A.I. tech is compelling because at the end of the day it’s meant to move the bottom line, helping nonprofits grow their revenue. We’ve also found that internally [at a charitable organization] there’s always a champion that sees the potential impact of technology; and that’s a great place to start with change,” Komarov says. “If it’s done right, tech can be an enabler of better work for organizations. From both research and experience, we know that tech adoption usually fails because of culture rather than the underlying technology. We’re here to work with the client closely to help that transition.”

I would like to have seen some numbers. For example, Komarov says that A.I. is having a positive impact on a charity’s bottom line. So, how much money did one of these charities raise? Was it more money than they would have made without A.I.? Assuming they did manage to raise greater funds, could another technology have been more cost-effective?

For another perspective (equally positive) on technology and charity, there’s a November 29, 2012 posting (Why technology and innovation are key to increasing charity donations) on the Guardian blogs by Henna Butt and Renita Shah (Note: Links have been removed),

At the beginning of this year the [UK] Cabinet Office and Nesta [formerly National Endowment for Science, Technology and the Arts {NESTA}] announced a £10m fund to invest in innovation in giving. The first tranche of this money has already been invested in promising initiatives such as Timto which allows you to create a gift list that includes a charity donation and Pennies, whose electronic money box allows customers to donate when paying for something in a shop using a credit card. Small and sizeable organisations alike are now using web and mobile technologies to make giving more convenient, more social and more compelling.

Butt and Shah’s focus was on mobile technologies and social networks. Like Levy-Ajzenkopf’s article, there’s no discussion of any possible downside to these technologies, e.g., privacy issues. As well, the inevitability of this move toward more technology for charity is explicitly stated by Levy-Ajzenkopf (“There is no going back”) and noted less starkly by Butt and Shah (“… innovation is becoming increasingly important for the success of charities”). To rephrase my concern, are we utilizing technology in our work or are we serving the needs of our technology?

Finally, for anyone who’s curious about the Fourth Industrial Revolution, I have a December 3, 2015 posting about it.

Control your electronics devices with your clothing while protecting yourself from bacteria

Purdue University researchers have developed a new fabric innovation that allows the wearer to control electronic devices through the clothing. Courtesy: Purdue University

I like the image but do they really want someone pressing a cufflink? Anyway, being able to turn on your house lights and music system with your clothing would certainly be convenient. From an August 8, 2019 Purdue University (Indiana, US) news release (also on EurekAlert) by Chris Adam,

A new addition to your wardrobe may soon help you turn on the lights and music – while also keeping you fresh, dry, fashionable, clean and safe from the latest virus that’s going around.

Purdue University researchers have developed a new fabric innovation that allows wearers to control electronic devices through clothing.

“It is the first time there is a technique capable of transforming any existing cloth item or textile into a self-powered e-textile containing sensors, music players or simple illumination displays using simple embroidery, without the need for expensive fabrication processes requiring complex steps or expensive equipment,” said Ramses Martinez, an assistant professor in the School of Industrial Engineering and in the Weldon School of Biomedical Engineering in Purdue’s College of Engineering.

The technology is featured in the July 25 [2019] edition of Advanced Functional Materials.

“For the first time, it is possible to fabricate textiles that can protect you from rain, stains, and bacteria while they harvest the energy of the user to power textile-based electronics,” Martinez said. “These self-powered e-textiles also constitute an important advancement in the development of wearable machine-human interfaces, which now can be washed many times in a conventional washing machine without apparent degradation.”

Martinez said the Purdue waterproof, breathable and antibacterial self-powered clothing is based on omniphobic triboelectric nanogenerators (RF-TENGs), which use simple embroidery and fluorinated molecules to embed small electronic components and turn a piece of clothing into a mechanism for powering devices. The Purdue team says the RF-TENG technology is like having a wearable remote control that also keeps odors, rain, stains and bacteria away from the user.

“While fashion has evolved significantly during the last centuries and has easily adopted recently developed high-performance materials, there are very few examples of clothes on the market that interact with the user,” Martinez said. “Having an interface with a machine that we are constantly wearing sounds like the most convenient approach for a seamless communication with machines and the Internet of Things.”

The technology is being patented through the Purdue Research Foundation Office of Technology Commercialization. The researchers are looking for partners to test and commercialize their technology.

Their work aligns with Purdue’s Giant Leaps celebration of the university’s global advancements in artificial intelligence and health as part of Purdue’s 150th anniversary. It is one of the four themes of the yearlong celebration’s Ideas Festival, designed to showcase Purdue as an intellectual center solving real-world issues.

Here’s a link to and a citation for the paper,

Waterproof, Breathable, and Antibacterial Self‐Powered e‐Textiles Based on Omniphobic Triboelectric Nanogenerators by Marina Sala de Medeiros, Daniela Chanci, Carolina Moreno, Debkalpa Goswami, Ramses V. Martinez. Advanced Functional Materials DOI: https://doi.org/10.1002/adfm.201904350 First published online: 25 July 2019

This paper is behind a paywall.

A deep look at atomic switches

A July 19, 2019 news item on phys.org describes research that may result in a substantive change for information technology,

A team of researchers from Tokyo Institute of Technology has gained unprecedented insight into the inner workings of an atomic switch. By investigating the composition of the tiny metal ‘bridge’ that forms inside the switch, their findings may spur the design of atomic switches with improved performance.

A July 22, 2019 Tokyo Institute of Technology press release (also on EurekAlert but published July 19, 2019), which originated the news item, explains how this research could have such an important impact,

Atomic switches are hailed as the tiniest of electrochemical switches that could change the face of information technology. Due to their nanoscale dimensions and low power consumption, they hold promise for integration into next-generation circuits that could drive the development of artificial intelligence (AI) and Internet of Things (IoT) devices.

Although various designs have emerged, one intriguing question concerns the nature of the metallic filament, or bridge, that is key to the operation of the switch. The bridge forms inside a metal sulfide layer sandwiched between two electrodes [see figure below], and is controlled by applying a voltage that induces an electrochemical reaction. The formation and annihilation of this bridge determines whether the switch is on or off.

Now, a research group including Akira Aiba and Manabu Kiguchi and colleagues at Tokyo Institute of Technology’s Department of Chemistry has found a useful way to examine precisely what the bridge is composed of.

By cooling the atomic switch enough so as to be able to investigate the bridge using a low-temperature measurement technique called point contact spectroscopy (PCS) [2], their study revealed that the bridge is made up of metal atoms from both the electrode and the metal sulfide layer. This surprising finding controverts the prevailing notion that the bridge derives from the electrode only, Kiguchi explains.

The team compared atomic switches with different combinations of electrodes (Pt and Ag, or Pt and Cu) and metal sulfide layers (Cu2S and Ag2S). In both cases, they found that the bridge is mainly composed of Ag.

The reason behind the dominance of Ag in the bridge is likely due to “the higher mobility of Ag ions compared to Cu ions”, the researchers say in their paper published in ACS Applied Materials & Interfaces.

They conclude that “it would be better to use metals with low mobility” for designing atomic switches with higher stability.

Much remains to be explored in the advancement of atomic switch technologies, and the team is continuing to investigate which combination of elements would be the most effective in improving performance.

###

Technical terms
[1] Atomic switch: The idea behind an atomic switch — one that can be controlled by the motion of a single atom — was introduced by Donald Eigler and colleagues at the IBM Almaden Research Center in 1991. Interest has since focused on how to realize and harness the potential of such extremely small switches for use in logic circuits and memory devices. Over the past two decades, researchers in Japan have taken a world-leading role in the development of atomic switch technologies.
[2] Point contact spectroscopy: A method of measuring the properties or excitations of single atoms at low temperature.

Caption: The ‘bridge’ that forms within the metal sulfide layer, connecting two metal electrodes, results in the atomic switch being turned on. Credit: Manabu Kiguchi

Here’s a link to and a citation for the paper,

Investigation of Ag and Cu Filament Formation Inside the Metal Sulfide Layer of an Atomic Switch Based on Point-Contact Spectroscopy by A. Aiba, R. Koizumi, T. Tsuruoka, K. Terabe, K. Tsukagoshi, S. Kaneko, S. Fujii, T. Nishino, M. Kiguchi. ACS Applied Materials & Interfaces DOI: https://doi.org/10.1021/acsami.9b05523 Publication Date: July 5, 2019 Copyright © 2019 American Chemical Society

This paper is behind a paywall.

For anyone who might need a bit of a refresher for the chemical elements, Pt is platinum, Ag is silver, and Cu is copper. So, with regard to the metal sulfide layers Cu2S is copper sulfide and Ag2S is silver sulfide.
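For readers who like to tinker, the set/reset behaviour described in the press release (a metallic filament that grows under one voltage polarity and dissolves under the other) can be sketched as a toy model. To be clear, everything below is my own illustration; the rates, threshold, and variable names are invented for the sketch and are not device physics from the paper.

```python
# Toy model of an electrochemical atomic switch: a positive bias grows a
# metallic filament bridging the electrodes; a negative bias dissolves it.
# All names and rate constants here are illustrative, not from the paper.

def step_filament(w, voltage, rate=0.2, dt=1.0):
    """Advance normalized filament length w (0 = no bridge, 1 = full bridge)."""
    w = w + rate * voltage * dt
    return min(max(w, 0.0), 1.0)  # clamp to the physical range

def is_on(w, threshold=0.5):
    """The switch conducts once the filament spans the gap."""
    return w >= threshold

# Apply a positive 'set' pulse train, then a negative 'reset' pulse train.
w = 0.0
for _ in range(5):
    w = step_filament(w, voltage=+1.0)   # filament grows
assert is_on(w)                          # switch turned on
for _ in range(5):
    w = step_filament(w, voltage=-1.0)   # filament dissolves
assert not is_on(w)                      # switch turned off
```

The point of the sketch is only that on/off is not stored in a transistor's charge but in the physical presence or absence of the bridge, which is why the researchers care so much about what the bridge is made of.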

Thin-film electronic stickers for the Internet of Things (IoT)

This research, from Purdue University (Indiana, US) and the University of Virginia (US), increases and improves the interactivity between objects in what’s called the Internet of Things (IoT).

Caption: Electronic stickers can turn ordinary toy blocks into high-tech sensors within the ‘internet of things.’ Credit: Purdue University image/Chi Hwan Lee

From a July 16, 2018 news item on ScienceDaily,

Billions of objects ranging from smartphones and watches to buildings, machine parts and medical devices have become wireless sensors of their environments, expanding a network called the “internet of things.”

As society moves toward connecting all objects to the internet — even furniture and office supplies — the technology that enables these objects to communicate and sense each other will need to scale up.

Researchers at Purdue University and the University of Virginia have developed a new fabrication method that makes tiny, thin-film electronic circuits peelable from a surface. The technique not only eliminates several manufacturing steps and the associated costs, but also allows any object to sense its environment or be controlled through the application of a high-tech sticker.

Eventually, these stickers could also facilitate wireless communication. …

A July 16, 2018 Purdue University news release (also on EurekAlert), which originated the news item, explains more,

“We could customize a sensor, stick it onto a drone, and send the drone to dangerous areas to detect gas leaks, for example,” said Chi Hwan Lee, Purdue assistant professor of biomedical engineering and mechanical engineering.

Most of today’s electronic circuits are individually built on their own silicon “wafer,” a flat and rigid substrate. The silicon wafer can then withstand the high temperatures and chemical etching that are used to remove the circuits from the wafer.

But high temperatures and etching damage the silicon wafer, forcing the manufacturing process to accommodate an entirely new wafer each time.

Lee’s new fabrication technique, called “transfer printing,” cuts down manufacturing costs by using a single wafer to build a nearly infinite number of thin films holding electronic circuits. Instead of high temperatures and chemicals, the film can peel off at room temperature with the energy-saving help of simply water.

“It’s like the red paint on San Francisco’s Golden Gate Bridge – paint peels because the environment is very wet,” Lee said. “So in our case, submerging the wafer and completed circuit in water significantly reduces the mechanical peeling stress and is environmentally-friendly.”

A ductile metal layer, such as nickel, inserted between the electronic film and the silicon wafer, makes the peeling possible in water. These thin-film electronics can then be trimmed and pasted onto any surface, granting that object electronic features.

Putting one of the stickers on a flower pot, for example, made that flower pot capable of sensing temperature changes that could affect the plant’s growth.

Lee’s lab also demonstrated that the components of electronic integrated circuits work just as well before and after they were made into a thin film peeled from a silicon wafer. The researchers used one film to turn on and off an LED light display.

“We’ve optimized this process so that we can delaminate electronic films from wafers in a defect-free manner,” Lee said.

This technology holds a non-provisional U.S. patent. The work was supported by the Purdue Research Foundation, the Air Force Research Laboratory (AFRL-S-114-054-002), the National Science Foundation (NSF-CMMI-1728149) and the University of Virginia.

The researchers have provided a video,

Here’s a link to and a citation for the paper,

Wafer-recyclable, environment-friendly transfer printing for large-scale thin-film nanoelectronics by Dae Seung Wie, Yue Zhang, Min Ku Kim, Bongjoong Kim, Sangwook Park, Young-Joon Kim, Pedro P. Irazoqui, Xiaolin Zheng, Baoxing Xu, and Chi Hwan Lee.
PNAS July 16, 2018 201806640 DOI: https://doi.org/10.1073/pnas.1806640115
published ahead of print July 16, 2018

This paper is behind a paywall.

Dexter Johnson provides some context in his July 25, 2018 posting on the Nanoclast blog (on the IEEE [Institute of Electronic and Electrical Engineers] website), Note: A link has been removed,

The Internet of Things (IoT), the interconnection of billions of objects and devices that will be communicating with each other, has been the topic of many futurists’ projections. However, getting the engineering sorted out with the aim of fully realizing the myriad visions for IoT is another story. One key issue to address: How do you get the electronics onto these devices efficiently and economically?

A team of researchers from Purdue University and the University of Virginia has developed a new manufacturing process that could make equipping a device with all the sensors and other electronics that will make it Internet capable as easily as putting a piece of tape on it.

… this new approach makes use of a water environment at room temperature to control the interfacial debonding process. This allows clean, intact delamination of prefabricated thin film devices when they’re pulled away from the original wafer.

The use of mechanical peeling in water rather than etching solution provides a number of benefits in the manufacturing scheme. Among them are simplicity, controllability, and cost effectiveness, says Chi Hwan Lee, assistant professor at Purdue University and coauthor of the paper chronicling the research.

If you have the time, do read Dexter’s piece. He always adds something that seems obvious in retrospect but wasn’t until he wrote it.

Call for abstracts: Seventh annual conference on governance of emerging technologies & science (GETS)

The conference itself will be held from May 22 – 24, 2019 at Arizona State University (ASU) and the deadline for abstracts is January 31, 2019. Here’s the news straight from the January 8, 2019 email announcement,

The Seventh Annual Conference on Governance of Emerging Technologies & Science (GETS)

May 22-24, 2019 / ASU / Sandra Day O’Connor College of Law
111 E. Taylor St., Phoenix, AZ
 
The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, digital health, human enhancement, artificial intelligence, virtual reality, internet of things (IoT), blockchain and much, much more!
 
Submit Your Abstract Here: 2019 Abstract
or
Conference Website
 
Call for abstracts:
 
The co-sponsors invite submission of abstracts for proposed presentations. Submitters of abstracts need not provide a written paper, although provision will be made for posting and possible post-conference publication of papers for those who are interested. 
Abstracts are invited for any aspect or topic relating to the governance of emerging technologies, including any of the technologies listed above.
 
·         Abstracts should not exceed 500 words and must contain your name and email address.
·         Abstracts must be submitted by January 31, 2019 to be considered. 
·         The sponsors will pay for the conference registration (including all conference meals and events) for one presenter for each accepted abstract. In addition, we will have limited funds available for travel subsidies (application included in submission form).
For more information, contact our Executive Director Josh Abbott at Josh.Abbott@asu.edu.

Good luck on your submission!

Media registration is open for the 2018 ITU ( International Telecommunication Union) Plenipotentiary Conference (PP-18) being held 29 October – 16 November 2018 in Dubai

I’m a little late with this but there’s still time to register should you happen to be in or able to get to Dubai easily. From an October 18, 2018 International Telecommunication Union (ITU) Media Advisory (received via email),

Media registration is open for the 2018 ITU Plenipotentiary Conference (PP-18) – the highest policy-making body of the International Telecommunication Union (ITU), the United Nations’ specialized agency for information and communication technology. This will be closing soon, so all media intending to attend the event MUST register as soon as possible here.

Held every four years, it is the key event at which ITU’s 193 Member States decide on the future role of the organization, thereby determining ITU’s ability to influence and affect the development of information and communication technologies (ICTs) worldwide. It is expected to attract around 3,000 participants, including Heads of State and an estimated 130 VIPs from more than 193 Member States and more than 800 private companies, academic institutions and national, regional and international bodies.

ITU plays an integral role in enabling the development and implementation of ICTs worldwide through its mandate to: coordinate the shared global use of the radio spectrum, promote international cooperation in assigning satellite orbits, work to improve communication infrastructure in the developing world, and establish worldwide standards that foster seamless interconnection of a vast range of communications systems.

Delegates will tackle a number of pressing issues, from strategies to promote digital inclusion and bridge the digital divide, to ways to leverage such emerging technologies as the Internet of Things, Artificial Intelligence, 5G, and others, to improve the way all of us, everywhere, live and work.

The conference also sets ITU’s Financial Plan and elects its five top executives – Secretary-General, Deputy Secretary-General, and the Directors of the Radiocommunication, Telecommunication Standardization and Telecommunication Development Bureaux – who will guide its work over the next four years.

What: ITU Plenipotentiary Conference 2018 (PP-18) sets the next four-year strategy, budget and leadership of ITU.

Why: Finance, Business, Tech, Development and Foreign Affairs reporters will find PP-18 relevant to their newsgathering. Decisions made at PP-18 are designed to create an enabling ICT environment where the benefits of digital connectivity can reach all people and economies, everywhere. As such, these decisions can have an impact on the telecommunication and technology sectors as well as developed and developing countries alike.

When: 29 October – 16 November 2018: With several Press Conferences planned during the event.

* Historically the Opening, Closing and Plenary sessions of this conference are open to media. Confirmation of those sessions open to media, and Press Conference times, will be made closer to the event date.

Where: Dubai World Trade Center, Dubai, United Arab Emirates

More Information:

REGISTER FOR ACCREDITATION

I visited the ‘ITU Events Registration and Accreditation Process for Media’ webpage and found these tidbits,

Accreditation eligibility & credentials 

1. Journalists* should provide an official letter of assignment from the Editor-in-Chief (or the News Editor for radio/TV). One letter per crew/editorial team will suffice. Editors-in-Chief and Bureau Chiefs should submit a letter from their Director. Please email this to pressreg@itu.int, along with the required supporting credentials below:​

    • print and online publications should be available to the general public and published at least 6 times a year by an organization whose principal business activity is publishing and which generally carries paid advertising;

      o 2 copies of recent byline articles published within the last 4 months.
    • news wire services should provide news coverage to subscribers, including newspapers, periodicals and/or television networks;

      o 2 copies of recent byline articles or broadcasting material published within the last 4 months.
    • broadcast should provide news and information programmes to the general public. Independent film and video production companies can only be accredited if officially mandated by a broadcast station via a letter of assignment;

      o broadcasting material published within the last 4 months.
    • freelance journalists including photographers, must provide clear documentation that they are on assignment from a specific news organization or publication. Evidence that they regularly supply journalistic content to recognized media may be acceptable in the absence of an assignment letter at the discretion of the ITU Media Relations Service.

      o a valid assignment letter from the news organization or publication.

 2. Bloggers may be granted accreditation if blog content is deemed relevant to the industry, contains news commentary, is regularly updated and made publicly available. Corporate bloggers are invited to register as participants. Please see Guidelines for Blogger Accreditation below for more details.

Guidelines for Blogger Accreditation

ITU is committed to working with independent ‘new media’ reporters and columnists who reach their audiences via blogs, podcasts, video blogs and other online media. These are the guidelines we use to determine whether to issue official media accreditation to independent online media representatives: 

ITU reserves the right to request traffic data from a third party (Sitemeter, Technorati, Feedburner, iTunes or equivalent) when considering your application. While the decision to grant access is not based solely on traffic/subscriber data, we ask that applicants provide sufficient transparency into their operations to help us make a fair and timely decision. 

Obtaining media accreditation for ITU events is an opportunity to meet and interact with key industry and political figures. While continued accreditation for ITU events is not directly contingent on producing coverage, owing to space limitations we may take this into consideration when processing future accreditation requests. Following any ITU event for which you are accredited, we therefore kindly request that you forward a link to your post/podcast/video blog to pressreg@itu.int. 

Bloggers who are granted access to ITU events are expected to act professionally. Those who do not maintain the standards expected of professional media representatives run the risk of having their accreditation withdrawn. 

If you can’t find answers to your questions on the ‘ITU Events Registration and Accreditation Process for Media’ webpage, you can contact,

For media accreditation inquiries:


Rita Soraya Abino-Quintana
Media Accreditation Officer
ITU Corporate Communications

Tel: +41 22 730 5424

For anything else, contact,

For general media inquiries:


Jennifer Ferguson-Mitchell
Senior Media and Communications Officer
ITU Corporate Communications

Tel: +41 22 730 5469

Mobile: +41 79 337 4615

There you have it.

New breed of memristors?

This new ‘breed’ of memristor (a component in brain-like/neuromorphic computing) is a kind of thin film. First, here’s an explanation of neuromorphic computing from the Finnish researchers looking into a new kind of memristor, from a January 10, 2018 news item on Nanowerk,

The internet of things [IOT] is coming, that much we know. But still it won’t; not until we have components and chips that can handle the explosion of data that comes with IoT. In 2020, there will already be 50 billion industrial internet sensors in place all around us. A single autonomous device – a smart watch, a cleaning robot, or a driverless car – can produce gigabytes of data each day, whereas an airbus may have over 10 000 sensors in one wing alone.

Two hurdles need to be overcome. First, current transistors in computer chips must be miniaturized to the size of only few nanometres; the problem is they won’t work anymore then. Second, analysing and storing unprecedented amounts of data will require equally huge amounts of energy. Sayani Majumdar, Academy Fellow at Aalto University, along with her colleagues, is designing technology to tackle both issues.

Majumdar has with her colleagues designed and fabricated the basic building blocks of future components in what are called “neuromorphic” computers inspired by the human brain. It’s a field of research on which the largest ICT companies in the world and also the EU are investing heavily. Still, no one has yet come up with a nano-scale hardware architecture that could be scaled to industrial manufacture and use.
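As a quick aside, the data figures quoted above are easy to put in perspective with a back-of-envelope calculation. The 50 billion sensor projection is from the news item; the per-device data rate below is my own illustrative assumption, not a number from the article.

```python
# Back-of-envelope check on the quoted IoT projections. The 50 billion
# sensor figure is from the article; the per-device data rate is an
# illustrative assumption of mine.

sensors = 50e9            # 50 billion sensors projected for 2020 (quoted)
gb_per_day_each = 1.0     # assume 1 GB/day per device (illustrative)

total_gb_per_day = sensors * gb_per_day_each
total_eb_per_day = total_gb_per_day / 1e9   # 1 exabyte = 1e9 gigabytes

print(f"{total_eb_per_day:.0f} exabytes per day")  # prints: 50 exabytes per day
```

Fifty exabytes a day, even under this crude assumption, makes the press release's point: the bottleneck is not collecting the data but storing and analysing it with tolerable energy costs.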

An Aalto University January 10, 2018 press release, which originated the news item, provides more detail about the work,

“The technology and design of neuromorphic computing is advancing more rapidly than its rival revolution, quantum computing. There is already wide speculation both in academia and company R&D about ways to inscribe heavy computing capabilities in the hardware of smart phones, tablets and laptops. The key is to achieve the extreme energy-efficiency of a biological brain and mimic the way neural networks process information through electric impulses,” explains Majumdar.

Basic components for computers that work like the brain

In their recent article in Advanced Functional Materials, Majumdar and her team show how they have fabricated a new breed of “ferroelectric tunnel junctions”, that is, few-nanometre-thick ferroelectric thin films sandwiched between two electrodes. They have abilities beyond existing technologies and bode well for energy-efficient and stable neuromorphic computing.

The junctions work in low voltages of less than five volts and with a variety of electrode materials – including silicon used in chips in most of our electronics. They also can retain data for more than 10 years without power and be manufactured in normal conditions.

Tunnel junctions have up to this point mostly been made of metal oxides and require 700 degree Celsius temperatures and high vacuums to manufacture. Ferroelectric materials also contain lead which makes them – and all our computers – a serious environmental hazard.

“Our junctions are made out of organic hydro-carbon materials and they would reduce the amount of toxic heavy metal waste in electronics. We can also make thousands of junctions a day in room temperature without them suffering from the water or oxygen in the air”, explains Majumdar.

What makes ferroelectric thin film components great for neuromorphic computers is their ability to switch between not only binary states – 0 and 1 – but a large number of intermediate states as well. This allows them to ‘memorise’ information not unlike the brain: to store it for a long time with minute amounts of energy and to retain the information they have once received – even after being switched off and on again.

We are no longer talking of transistors, but ‘memristors’. They are ideal for computation similar to that in biological brains.  Take for example the Mars 2020 Rover about to go chart the composition of another planet. For the Rover to work and process data on its own using only a single solar panel as an energy source, the unsupervised algorithms in it will need to use an artificial brain in the hardware.

“What we are striving for now, is to integrate millions of our tunnel junction memristors into a network on a one square centimetre area. We can expect to pack so many in such a small space because we have now achieved a record-high difference in the current between on and off-states in the junctions and that provides functional stability. The memristors could then perform complex tasks like image and pattern recognition and make decisions autonomously,” says Majumdar.
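The key property Majumdar describes, switching among many intermediate states rather than just 0 and 1, can be sketched as a toy model. The class, level count, and resistance values below are purely illustrative inventions of mine, not measurements from the paper.

```python
import math

# Toy multilevel memristor: unlike a binary transistor, the junction can be
# set to many intermediate resistance states and retains its state without
# power. Level count and resistance values are illustrative only.

class ToyMemristor:
    def __init__(self, levels=8, r_on=1e3, r_off=1e6):
        self.levels = levels              # number of distinguishable states
        self.r_on, self.r_off = r_on, r_off
        self.state = 0                    # 0 = fully off, levels-1 = fully on

    def pulse(self, n=1):
        """Each voltage pulse nudges the state up (n > 0) or down (n < 0)."""
        self.state = min(max(self.state + n, 0), self.levels - 1)

    def resistance(self):
        """Interpolate between off and on resistance on a log scale."""
        frac = self.state / (self.levels - 1)
        log_r = (1 - frac) * math.log10(self.r_off) + frac * math.log10(self.r_on)
        return 10 ** log_r

m = ToyMemristor()
m.pulse(3)                                  # a partial 'write' lands in between
assert m.r_on < m.resistance() < m.r_off    # neither fully on nor fully off
```

Eight levels would store three bits per junction; in the real devices the intermediate states come from partially switching the ferroelectric polarization with voltage pulses, which is what the state counter stands in for here.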

The probe-station device (the full instrument, left, and a closer view of the device connection, right) which measures the electrical responses of the basic components for computers mimicking the human brain. The tunnel junctions are on a thin film on the substrate plate. Photo: Tapio Reinekoski

Here’s a link to and a citation for the paper,

Electrode Dependence of Tunneling Electroresistance and Switching Stability in Organic Ferroelectric P(VDF-TrFE)-Based Tunnel Junctions by Sayani Majumdar, Binbin Chen, Qi Hang Qin, Himadri S. Majumdar, and Sebastiaan van Dijken. Advanced Functional Materials Vol. 28 Issue 2 DOI: 10.1002/adfm.201703273 Version of Record online: 27 NOV 2017

© 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim

This paper is behind a paywall.

Is technology taking our jobs? (a Women in Communications and Technology, BC Chapter event) and Brave New Work in Vancouver (Canada)

Awkwardly named as it is, the Women in Communications and Technology BC Chapter (WCTBC) has been reinvigorated after a moribund period (from a Feb. 21, 2018 posting by Rebecca Bollwitt for the Miss 604 blog),

There’s an exciting new organization and event series coming to Vancouver, which will aim to connect, inspire, and advance women in the communications and technology industries. I’m honoured to be on the Board of Directors for the newly rebooted Women in Communications and Technology, BC Chapter (“WCTBC”) and we’re ready to announce our first event!

Women in Debate: Is Technology Taking Our Jobs?

When: Tuesday, March 6, 2018 at 5:30pm
Where: BLG – 200 Burrard, 1200 Waterfront Centre, Vancouver
Tickets: Register online today. The cost is $25 for WCT members and $35 for non-members.

Automation, driven by technological progress, has been expanding for the past several decades. As the pace of development increases, so has the urgency in the debate about the potential effects of automation on jobs, employment, and human activity. Will new technology spawn mass unemployment, as the robots take jobs away from humans? Or is this part of a cycle that predates even the Industrial Revolution in which some jobs will become obsolete, while new jobs will be created?

Debaters:
Christin Wiedemann – Co-CEO, PQA Testing
Kathy Gibson – President, Catchy Consulting
Laura Sukorokoff – Senior Trainer & Communications, Hyperwallet
Sally Whitehead – Global Director, Sophos

Based on the Oxford style debates popularized by the podcast ‘Intelligence Squared’, the BC chapter of Women in Communications and Technology brings you Women in Debate: Is Technology Taking Our Jobs?

For anyone not familiar with “Intelligence Squared,”  there’s this from their About webpage,

Intelligence Squared is the world’s premier forum for debate and intelligent discussion. Live and online we take you to the heart of the issues that matter, in the company of some of the world’s sharpest minds and most exciting orators.

Intelligence Squared Live

Our events have captured the imagination of public audiences for more than a decade, welcoming the biggest names in politics, journalism and the arts. Our celebrated list of speakers includes President Jimmy Carter, Stephen Fry, Patti Smith, Richard Dawkins, Sean Penn, Marina Abramovic, Werner Herzog, Terry Gilliam, Anne Marie Slaughter, Reverend Jesse Jackson, Mary Beard, Yuval Noah Harari, Jonathan Franzen, Salman Rushdie, Eric Schmidt, Richard Branson, Professor Brian Cox, Nate Silver, Umberto Eco, Martin Amis and Grayson Perry.

Further digging into WCTBC unearthed this story about the reasons for its ‘reboot’, from the Who we are / Regional Chapters / British Columbia webpage,

“Earlier this month [October 2017?], Christin Wiedemann and Briana Sim, co-Chairs of the BC Chapter of WCT, attended a Women in IoT [Internet of Things] event in Vancouver. The event was organized by the GE Women’s Network and TELUS Connections, with WCT as an event partner. The event sold out after only two days, and close to 200 women attended.

Five female panelists representing different backgrounds and industries talked about the impact IoT is having on our lives today, and how they think IoT fits into the future of the technology landscape. Christin facilitated the Q&A portion of the event, and had an opportunity to share that the BC chapter is rebooting and hopes to launch a kickoff event later in November”

You can find a summary of the event here (http://gereports.ca/theres-lots-room-us-top-insights-five-canadas-top-women-business-leaders-iot/#), and you can also check out the Storify (https://storify.com/cwiedemann/women-in-iot).”

– October 6th, 2017

Simon Fraser University’s Brave New Work

Coincidentally or not, there’s a major series of events being offered by Simon Fraser University’s (SFU; located in Vancouver, British Columbia, Canada) Public Square Programme in their 2018 Community Summit Series titled: Brave New Work; How can we thrive in the changing world of work? which takes place February 26, 2018 to March 7, 2018.

There’s not a single mention (!!!!!) of Brave New World (by Aldous Huxley) in what is clearly wordplay on Huxley’s title.

From the 2018 Community Summit: Brave New Work webpage on the SFU website (Note: Links have been removed),

How can we thrive in the changing world of work?

The 2018 Community Summit, Brave New Work, invites us to consider how we can all thrive in the changing world of work.

Technological growth is happening at an unprecedented rate and scale, and it is fundamentally altering the way we organize and value work. The work we do (and how we do it) is changing. One of the biggest challenges in effectively responding to this new world of work is creating a shared understanding of the issues at play and how they intersect. Individuals, businesses, governments, educational institutions, and civil society must collaborate to construct the future we want.

The future of work is here, but it’s still ours to define. From February 26th to March 7th, we will convene diverse communities through a range of events and activities to provoke thinking and encourage solution-finding. We hope you’ll join us.

The New World of Work: Thriving or Surviving?

As part of its 2018 Community Summit, Brave New Work, SFU Public Square is proud to present, in partnership with Vancity, an evening with Van Jones and Anne-Marie Slaughter, moderated by CBC’s Laura Lynch at the Queen Elizabeth Theatre.

Van Jones and Anne-Marie Slaughter, two leading commentators on the American economy, will discuss the role that citizens, governments and civil society can play in shaping the future of work. They will explore the challenges ahead, as well as how these challenges might be addressed through green jobs, emergent industries, education and public policy.

Join us for an important conversation about how the future of work can be made to work for all of us.

Are you a member of Vancity? As one of the many perks of being a Vancity member, you have access to a free ticket to attend the event. For your free ticket, please visit Vancity for more information. There are a limited number of seats reserved for Vancity members, so we encourage you to register early.

Tickets are now on sale, get yours today!

Future of Work in Canada: Emerging Trends and Opportunities

What are some of the trends currently defining the new world of work in Canada, and what does our future look like? What opportunities can be seized to build more competitive, prosperous, and inclusive organizations? This mini-conference, presented in partnership with Deloitte Canada, will feature panel discussions and presentations by representatives from Deloitte, Brookfield Institute for Innovation & Entrepreneurship, Vancity, Futurpreneur, and many more.

Work in the 21st Century: Innovations in Research

Research doesn’t just live in libraries and academic papers; it has a profound impact on our day to day lives. Work in the 21st Century is a dynamic evening that showcases the SFU researchers and entrepreneurs who are leading the way in making innovative impacts in the new world of work.

Basic Income

This lecture will examine the question of basic income (BI). A neoliberal version of BI is being considered and even developed by a number of governments and institutions of global capitalism. This form of BI could enhance the supply of low wage precarious workers, by offering a public subsidy to employers, paid for by cuts to other areas of social provision.

ReframeWork

ReframeWork is a national gathering of leading thinkers and innovators on the topic of Future of Work. We will explore how Canada can lead in forming new systems for good work and identify the richest areas of opportunity for solution-building that effects broader change.

The Urban Worker Project Skillshare

The Urban Worker Project Skillshare is a day-long gathering, bringing together over 150 independent workers to lean on each other, learn from each other, get valuable expert advice, and build community. Join us!

SFU City Conversations: Making Visible the Invisible

Are outdated and stereotypical gender roles contributing to the invisible workload? What is the invisible workload anyway? Don’t miss this special edition of SFU City Conversations on intersectionality and invisible labour, presented in partnership with the Simon Fraser Student Society Women’s Centre.

Climate of Work: How Does Climate Change Affect the Future of Work?

What does our changing climate have to do with the future of work? Join Embark as they explore the ways our climate impacts different industries such as planning, communications or entrepreneurship.

Symposium: Art, Labour, and the Future of Work

One of the key distinguishing features of Western modernity is that the activity of labour has always been at the heart of our self-understanding. Work defines who we are. But what might we do in a world without work? Join SFU’s Institute for the Humanities for a symposium on art, aesthetics, and self-understanding.

Worker Writers and the Poetics of Labour

If you gave a worker a pen, what would they write? What stories would they tell, and what experiences might they share? Hear poetry about what it is to work in the 21st century directly from participants of the Worker Writers School at this free public poetry reading.

Creating a Diverse and Resilient Economy in Metro Vancouver

This panel conversation event will focus on the future of employment in Metro Vancouver, and planning for the employment lands that support the regional economy. What are the trends and issues related to employment in various sectors in Metro Vancouver, and how does land use planning, regulation, and market demand affect the future of work regionally?

Preparing Students for the Future World of Work

This event, hosted by CACEE Canada West and SFU Career and Volunteer Services, will feature presentations and discussions on how post-secondary institutions can prepare students for the future of work.

Work and Purpose Later in Life

How is the changing world of work affecting older adults? And what role should work play in our lives, anyway? This special Philosophers’ Cafe will address questions of retirement, purpose, and work for older adults.

Beyond Bitcoin: Blockchain and the Future of Work

Blockchain technology is making headlines. Enthusiastic or skeptic, the focus of this dialogue will be to better understand key concepts and to explore the wide-ranging applications of distributed ledgers and the implications for business here in BC and in the global economy.

Building Your Resilience

Being a university student can be stressful. This interactive event will share key strategies for enhancing your resilience and well-being that will support your success now and in your future career.

We may not be working because of robots (no mention of automation in the SFU descriptions?), but we sure will talk about work-related topics. Sarcasm aside, it's good to see this interest in work and in public discussion, although I'm deeply puzzled by SFU's decision to seemingly ignore technology, except for blockchain. Thank goodness for WCTBC. At any rate, I'm often somewhat envious of what goes on elsewhere, so it's nice to see this level of excitement and effort here in Vancouver.