Tag Archives: speech recognition

Automated science writing?

It seems that automated science writing is not ready—yet. Still, an April 18, 2019 news item on ScienceDaily suggests that progress is being made,

The work of a science writer, including this one, includes reading journal papers filled with specialized technical terminology, and figuring out how to explain their contents in language that readers without a scientific background can understand.

Now, a team of scientists at MIT [Massachusetts Institute of Technology] and elsewhere has developed a neural network, a form of artificial intelligence (AI), that can do much the same thing, at least to a limited extent: It can read scientific papers and render a plain-English summary in a sentence or two.

An April 17, 2019 MIT news release, which originated the news item, delves into the research and its implications,

Even in this limited form, such a neural network could be useful for helping editors, writers, and scientists [emphasis mine] scan a large number of papers to get a preliminary sense of what they’re about. But the approach the team developed could also find applications in a variety of other areas besides language processing, including machine translation and speech recognition.

The work is described in the journal Transactions of the Association for Computational Linguistics, in a paper by Rumen Dangovski and Li Jing, both MIT graduate students; Marin Soljačić, a professor of physics at MIT; Preslav Nakov, a principal scientist at the Qatar Computing Research Institute, HBKU; and Mićo Tatalović, a former Knight Science Journalism fellow at MIT and a former editor at New Scientist magazine.

From AI for physics to natural language

The work came about as a result of an unrelated project, which involved developing new artificial intelligence approaches based on neural networks, aimed at tackling certain thorny problems in physics. However, the researchers soon realized that the same approach could be used to address other difficult computational problems, including natural language processing, in ways that might outperform existing neural network systems.

“We have been doing various kinds of work in AI for a few years now,” Soljačić says. “We use AI to help with our research, basically to do physics better. And as we got to be more familiar with AI, we would notice that every once in a while there is an opportunity to add to the field of AI because of something that we know from physics — a certain mathematical construct or a certain law in physics. We noticed that hey, if we use that, it could actually help with this or that particular AI algorithm.”

This approach could be useful in a variety of specific kinds of tasks, he says, but not all. “We can’t say this is useful for all of AI, but there are instances where we can use an insight from physics to improve on a given AI algorithm.”

Neural networks in general are an attempt to mimic the way humans learn certain new things: The computer examines many different examples and “learns” what the key underlying patterns are. Such systems are widely used for pattern recognition, such as learning to identify objects depicted in photos.

But neural networks in general have difficulty correlating information from a long string of data, such as is required in interpreting a research paper. Various tricks have been used to improve this capability, including techniques known as long short-term memory (LSTM) and gated recurrent units (GRU), but these still fall well short of what’s needed for real natural-language processing, the researchers say.
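For readers curious what one of those “tricks” looks like in practice, here is a minimal sketch (assuming PyTorch, which is not named in the article): a gated recurrent unit reads a long run of word vectors one step at a time and squeezes everything it has seen into a fixed-size hidden state, with its gates deciding what to keep and what to overwrite.

```python
import torch
import torch.nn as nn

# Minimal sketch (PyTorch assumed; not the researchers' code): a gated
# recurrent unit reads 500 word vectors one step at a time, carrying a
# fixed-size hidden state forward. The gates ease, but do not solve,
# the long-range dependency problem described above.
gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)

document = torch.randn(1, 500, 32)       # one pretend document: 500 word vectors
outputs, final_state = gru(document)     # final_state is the network's "memory"
print(outputs.shape, final_state.shape)  # torch.Size([1, 500, 64]) torch.Size([1, 1, 64])
```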

The team came up with an alternative system, which instead of being based on the multiplication of matrices, as most conventional neural networks are, is based on vectors rotating in a multidimensional space. The key concept is something they call a rotational unit of memory (RUM).

Essentially, the system represents each word in the text by a vector in multidimensional space — a line of a certain length pointing in a particular direction. Each subsequent word swings this vector in some direction, represented in a theoretical space that can ultimately have thousands of dimensions. At the end of the process, the final vector or set of vectors is translated back into its corresponding string of words.
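To make the “swinging vector” image concrete, here is a toy numpy sketch of my own; it is not the paper’s learned parameterization, just the geometric idea of nudging a memory vector toward each incoming word vector by rotating it within the plane the two vectors span.

```python
import numpy as np

def rotate_toward(memory, word, angle):
    """Rotate `memory` toward `word` by `angle` radians, inside the 2-D plane
    spanned by the two vectors; every other direction is left untouched.
    (Toy illustration only -- not the trained RUM operation from the paper.)"""
    u = memory / np.linalg.norm(memory)
    v = word - (word @ u) * u              # part of `word` orthogonal to `memory`
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:                     # word parallel to memory: nothing to do
        return memory
    v = v / norm_v
    a, b = memory @ u, memory @ v          # coordinates of memory in the (u, v) plane
    rest = memory - a * u - b * v          # component outside the plane (~0 here)
    a_new = a * np.cos(angle) - b * np.sin(angle)
    b_new = a * np.sin(angle) + b * np.cos(angle)
    return rest + a_new * u + b_new * v

# "Reading" a five-word sentence: each word embedding swings the memory vector.
rng = np.random.default_rng(0)
memory = rng.normal(size=8)
for word_embedding in rng.normal(size=(5, 8)):
    memory = rotate_toward(memory, word_embedding, angle=0.3)
print(np.round(memory, 3))
```

Because rotations preserve a vector’s length, the memory neither explodes nor fades as more words arrive, which is one intuition for why this kind of unit can hold onto information over long stretches of text.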

“RUM helps neural networks to do two things very well,” Nakov says. “It helps them to remember better, and it enables them to recall information more accurately.”

After developing the RUM system to help with certain tough physics problems such as the behavior of light in complex engineered materials, “we realized one of the places where we thought this approach could be useful would be natural language processing,” says Soljačić, recalling a conversation with Tatalović, who noted that such a tool would be useful for his work as an editor trying to decide which papers to write about. Tatalović was at the time exploring AI in science journalism as his Knight fellowship project.

“And so we tried a few natural language processing tasks on it,” Soljačić says. “One that we tried was summarizing articles, and that seems to be working quite well.”

The proof is in the reading

As an example, they fed the same research paper through a conventional LSTM-based neural network and through their RUM-based system. The resulting summaries were dramatically different.

The LSTM system yielded this highly repetitive and fairly technical summary: “Baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat and has caused disease like blindness or severe consequences. This infection, termed “baylisascariasis,” kills mice, has endangered the allegheny woodrat.

Based on the same paper, the RUM system produced a much more readable summary, and one that did not include the needless repetition of phrases: Urban raccoons may infect people more than previously assumed. 7 percent of surveyed individuals tested positive for raccoon roundworm antibodies. Over 90 percent of raccoons in Santa Barbara play host to this parasite.

Already, the RUM-based system has been expanded so it can “read” through entire research papers, not just the abstracts, to produce a summary of their contents. The researchers have even tried using the system on their own research paper describing these findings — the paper that this news story is attempting to summarize.

Here is the new neural network’s summary: Researchers have developed a new representation process on the rotational unit of RUM, a recurrent memory that can be used to solve a broad spectrum of the neural revolution in natural language processing.

It may not be elegant prose, but it does at least hit the key points of information.

Çağlar Gülçehre, a research scientist at the British AI company DeepMind Technologies, who was not involved in this work, says this research tackles an important problem in neural networks, having to do with relating pieces of information that are widely separated in time or space. “This problem has been a very fundamental issue in AI due to the necessity to do reasoning over long time-delays in sequence-prediction tasks,” he says. “Although I do not think this paper completely solves this problem, it shows promising results on the long-term dependency tasks such as question-answering, text summarization, and associative recall.”

Gülçehre adds, “Since the experiments conducted and model proposed in this paper are released as open-source on Github, as a result many researchers will be interested in trying it on their own tasks. … To be more specific, potentially the approach proposed in this paper can have very high impact on the fields of natural language processing and reinforcement learning, where the long-term dependencies are very crucial.”

The research received support from the Army Research Office, the National Science Foundation, the MIT-SenseTime Alliance on Artificial Intelligence, and the Semiconductor Research Corporation. The team also had help from the Science Daily website, whose articles were used in training some of the AI models in this research.

As usual, this ‘automated writing system’ is framed as a ‘helper,’ not a usurper of anyone’s job. However, its potential for changing the nature of the work is there. About five years ago, I featured another ‘automated writing’ story in a July 16, 2014 posting titled: ‘Writing and AI or is a robot writing this blog?’ You may have been reading ‘automated’ news stories for years. At the time, the focus was on sports and business.

Getting back to 2019 and science writing, here’s a link to and a citation for the paper,

Rotational Unit of Memory: A Novel Representation Unit for RNNs with Scalable Applications by Rumen Dangovski, Li Jing, Preslav Nakov, Mićo Tatalović and Marin Soljačić. Transactions of the Association for Computational Linguistics, Volume 7, 2019, pp. 121-138. DOI: https://doi.org/10.1162/tacl_a_00258 Posted online 2019

© 2019 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.

This paper is open access.

Brainy and brainier: a novel synaptic architecture and a neuromorphic computing platform called SpiNNaker

I have two items about brain-like computing. The first item hearkens back to memristors, a topic I have been following since 2008. (If you’re curious about the various twists and turns, just enter the term ‘memristor’ in this blog’s search engine.) The latest on memristors is from a team that includes IBM (US), École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), and the New Jersey Institute of Technology (NJIT; US). The second bit comes from a Jülich Research Centre team in Germany and concerns an approach to brain-like computing that does not include memristors.

Multi-memristive synapses

In the inexorable march to make computers function more like human brains (neuromorphic engineering/computing), an international team has announced its latest results in a July 10, 2018 news item on Nanowerk,

Two New Jersey Institute of Technology (NJIT) researchers, working with collaborators from the IBM Research Zurich Laboratory and the École Polytechnique Fédérale de Lausanne, have demonstrated a novel synaptic architecture that could lead to a new class of information processing systems inspired by the brain.

The findings are an important step toward building more energy-efficient computing systems that also are capable of learning and adaptation in the real world. …

A July 10, 2018 NJIT news release (also on EurekAlert) by Tracey Regan, which originated the news item, adds more details,

The researchers, Bipin Rajendran, an associate professor of electrical and computer engineering, and S. R. Nandakumar, a graduate student in electrical engineering, have been developing brain-inspired computing systems that could be used for a wide range of big data applications.

Over the past few years, deep learning algorithms have proven to be highly successful in solving complex cognitive tasks such as controlling self-driving cars and language understanding. At the heart of these algorithms are artificial neural networks – mathematical models of the neurons and synapses of the brain – that are fed huge amounts of data so that the synaptic strengths are autonomously adjusted to learn the intrinsic features and hidden correlations in these data streams.

However, the implementation of these brain-inspired algorithms on conventional computers is highly inefficient, consuming huge amounts of power and time. This has prompted engineers to search for new materials and devices to build special-purpose computers that can incorporate the algorithms. Nanoscale memristive devices, electrical components whose conductivity depends approximately on prior signaling activity, can be used to represent the synaptic strength between the neurons in artificial neural networks.

While memristive devices could potentially lead to faster and more power-efficient computing systems, they are also plagued by several reliability issues that are common to nanoscale devices. Their efficiency stems from their ability to be programmed in an analog manner to store multiple bits of information; however, their electrical conductivities vary in a non-deterministic and non-linear fashion.

In the experiment, the team showed how multiple nanoscale memristive devices exhibiting these characteristics could nonetheless be configured to efficiently implement artificial intelligence algorithms such as deep learning. Prototype chips from IBM containing more than one million nanoscale phase-change memristive devices were used to implement a neural network for the detection of hidden patterns and correlations in time-varying signals.

“In this work, we proposed and experimentally demonstrated a scheme to obtain high learning efficiencies with nanoscale memristive devices for implementing learning algorithms,” Nandakumar says. “The central idea in our demonstration was to use several memristive devices in parallel to represent the strength of a synapse of a neural network, but only chose one of them to be updated at each step based on the neuronal activity.”
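For the curious, here is roughly how that ‘several devices per synapse, update only one’ idea might look in toy code. This is my own illustration, with an invented noise model and made-up parameters, not the IBM/NJIT experimental setup:

```python
import numpy as np

class MultiMemristiveSynapse:
    """Toy model: one synaptic weight stored across several imperfect devices.
    Reads sum every device; each write nudges only one device, chosen in
    rotation, so the noisy, non-linear behaviour of any single device
    averages out. (Illustrative sketch only -- parameters are invented.)"""

    def __init__(self, n_devices=4, seed=None):
        self.g = np.zeros(n_devices)              # device conductances (arbitrary units)
        self.counter = 0                           # round-robin device selector
        self.rng = np.random.default_rng(seed)

    @property
    def weight(self):
        return self.g.sum()                        # read: all devices contribute

    def update(self, delta):
        i = self.counter % len(self.g)             # write: touch only one device
        noise = self.rng.normal(scale=0.1 * abs(delta) + 1e-12)
        self.g[i] = np.clip(self.g[i] + delta + noise, 0.0, 1.0)
        self.counter += 1

syn = MultiMemristiveSynapse(seed=42)
for _ in range(20):
    syn.update(+0.05)                              # twenty small potentiation pulses
print(round(syn.weight, 3))
```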

Here’s a link to and a citation for the paper,

Neuromorphic computing with multi-memristive synapses by Irem Boybat, Manuel Le Gallo, S. R. Nandakumar, Timoleon Moraitis, Thomas Parnell, Tomas Tuma, Bipin Rajendran, Yusuf Leblebici, Abu Sebastian, & Evangelos Eleftheriou. Nature Communications volume 9, Article number: 2514 (2018) DOI: https://doi.org/10.1038/s41467-018-04933-y Published 28 June 2018

This is an open access paper.

Also, they’ve got a couple of very nice introductory paragraphs, which I’m including here (from the June 28, 2018 paper in Nature Communications; Note: Links have been removed),

The human brain with less than 20 W of power consumption offers a processing capability that exceeds the petaflops mark, and thus outperforms state-of-the-art supercomputers by several orders of magnitude in terms of energy efficiency and volume. Building ultra-low-power cognitive computing systems inspired by the operating principles of the brain is a promising avenue towards achieving such efficiency. Recently, deep learning has revolutionized the field of machine learning by providing human-like performance in areas, such as computer vision, speech recognition, and complex strategic games1. However, current hardware implementations of deep neural networks are still far from competing with biological neural systems in terms of real-time information-processing capabilities with comparable energy consumption.

One of the reasons for this inefficiency is that most neural networks are implemented on computing systems based on the conventional von Neumann architecture with separate memory and processing units. There are a few attempts to build custom neuromorphic hardware that is optimized to implement neural algorithms2,3,4,5. However, as these custom systems are typically based on conventional silicon complementary metal oxide semiconductor (CMOS) circuitry, the area efficiency of such hardware implementations will remain relatively low, especially if in situ learning and non-volatile synaptic behavior have to be incorporated. Recently, a new class of nanoscale devices has shown promise for realizing the synaptic dynamics in a compact and power-efficient manner. These memristive devices store information in their resistance/conductance states and exhibit conductivity modulation based on the programming history6,7,8,9. The central idea in building cognitive hardware based on memristive devices is to store the synaptic weights as their conductance states and to perform the associated computational tasks in place.

The two essential synaptic attributes that need to be emulated by memristive devices are the synaptic efficacy and plasticity. …

It gets more complicated from there.
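That said, the ‘compute in place’ idea from the excerpt above reduces to Ohm’s and Kirchhoff’s laws: store the weights as conductances, apply the input as voltages, and the currents flowing out of each column already are the matrix-vector product. Here is a deliberately idealized numpy sketch of my own (real devices add noise, drift, non-linearity and limited precision):

```python
import numpy as np

# Idealized picture of in-memory computing on a memristive crossbar:
# weights live as conductances G, the input arrives as row voltages V,
# and each column current is sum_i G[i, j] * V[i] -- the matrix-vector
# product happens where the data is stored, with no separate memory fetch.
rng = np.random.default_rng(1)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4 input rows x 3 output columns
V = rng.uniform(0.0, 0.2, size=4)        # input voltages applied to the rows
I = G.T @ V                              # column currents = the computation's result
print(np.round(I, 4))
```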

Now onto the next bit.

SpiNNaker

At a guess, those capitalized N’s are meant to indicate ‘neural networks’. As best I can determine, SpiNNaker is not based on the memristor. Moving on, a July 11, 2018 news item on phys.org announces work from a team examining how neuromorphic hardware and neuromorphic software work together,

A computer built to mimic the brain’s neural networks produces similar results to that of the best brain-simulation supercomputer software currently used for neural-signaling research, finds a new study published in the open-access journal Frontiers in Neuroscience. Tested for accuracy, speed and energy efficiency, this custom-built computer named SpiNNaker, has the potential to overcome the speed and power consumption problems of conventional supercomputers. The aim is to advance our knowledge of neural processing in the brain, to include learning and disorders such as epilepsy and Alzheimer’s disease.

A July 11, 2018 Frontiers Publishing news release on EurekAlert, which originated the news item, expands on the latest work,

“SpiNNaker can support detailed biological models of the cortex–the outer layer of the brain that receives and processes information from the senses–delivering results very similar to those from an equivalent supercomputer software simulation,” says Dr. Sacha van Albada, lead author of this study and leader of the Theoretical Neuroanatomy group at the Jülich Research Centre, Germany. “The ability to run large-scale detailed neural networks quickly and at low power consumption will advance robotics research and facilitate studies on learning and brain disorders.”

The human brain is extremely complex, comprising 100 billion interconnected brain cells. We understand how individual neurons and their components behave and communicate with each other and on the larger scale, which areas of the brain are used for sensory perception, action and cognition. However, we know less about the translation of neural activity into behavior, such as turning thought into muscle movement.

Supercomputer software has helped by simulating the exchange of signals between neurons, but even the best software run on the fastest supercomputers to date can only simulate 1% of the human brain.

“It is presently unclear which computer architecture is best suited to study whole-brain networks efficiently. The European Human Brain Project and Jülich Research Centre have performed extensive research to identify the best strategy for this highly complex problem. Today’s supercomputers require several minutes to simulate one second of real time, so studies on processes like learning, which take hours and days in real time, are currently out of reach,” explains Professor Markus Diesmann, co-author, head of the Computational and Systems Neuroscience department at the Jülich Research Centre.

He continues, “There is a huge gap between the energy consumption of the brain and today’s supercomputers. Neuromorphic (brain-inspired) computing allows us to investigate how close we can get to the energy efficiency of the brain using electronics.”

Developed over the past 15 years and based on the structure and function of the human brain, SpiNNaker — part of the Neuromorphic Computing Platform of the Human Brain Project — is a custom-built computer composed of half a million simple computing elements controlled by its own software. The researchers compared the accuracy, speed and energy efficiency of SpiNNaker with that of NEST–a specialist supercomputer software currently in use for brain neuron-signaling research.

“The simulations run on NEST and SpiNNaker showed very similar results,” reports Steve Furber, co-author and Professor of Computer Engineering at the University of Manchester, UK. “This is the first time such a detailed simulation of the cortex has been run on SpiNNaker, or on any neuromorphic platform. SpiNNaker comprises 600 circuit boards incorporating over 500,000 small processors in total. The simulation described in this study used just six boards–1% of the total capability of the machine. The findings from our research will improve the software to reduce this to a single board.”

Van Albada shares her future aspirations for SpiNNaker, “We hope for increasingly large real-time simulations with these neuromorphic computing systems. In the Human Brain Project, we already work with neuroroboticists who hope to use them for robotic control.”
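To give a sense of what ‘the same model on NEST or on SpiNNaker’ can look like from a researcher’s keyboard, here is a minimal sketch using the PyNN interface. This is my own illustration, assuming the standard PyNN API rather than reproducing the study’s (much larger) cortical microcircuit model; in principle, swapping the backend import is what moves a description between the software simulator and the neuromorphic machine.

```python
# Minimal sketch (standard PyNN API assumed; not the study's actual model).
# Replacing this import with `import pyNN.spiNNaker as sim` is, in principle,
# how the same network description is run on the SpiNNaker hardware instead.
import pyNN.nest as sim

sim.setup(timestep=0.1)                                  # simulation step in ms

neurons = sim.Population(100, sim.IF_curr_exp())         # integrate-and-fire neurons
noise = sim.Population(100, sim.SpikeSourcePoisson(rate=10.0))
sim.Projection(noise, neurons,
               sim.OneToOneConnector(),
               sim.StaticSynapse(weight=0.5, delay=1.0))

neurons.record("spikes")
sim.run(1000.0)                                          # simulate one second

spike_data = neurons.get_data("spikes")
sim.end()
```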

Before getting to the link and citation for the paper, here’s a description of SpiNNaker’s hardware from the ‘Spiking neural network’ Wikipedia entry (Note: Links have been removed),

Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture) [emphasis mine], designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]

Now for the link and citation,

Performance Comparison of the Digital Neuromorphic Hardware SpiNNaker and the Neural Network Simulation Software NEST for a Full-Scale Cortical Microcircuit Model by Sacha J. van Albada, Andrew G. Rowley, Johanna Senk, Michael Hopkins, Maximilian Schmidt, Alan B. Stokes, David R. Lester, Markus Diesmann, and Steve B. Furber. Front. Neurosci. 12:291. DOI: https://doi.org/10.3389/fnins.2018.00291 Published: 23 May 2018

As noted earlier, this is an open access paper.

smARTcities SALON in Vaughan, Ontario, Canada on March 22, 2018

Thank goodness for the March 15, 2018 notice from the Art/Sci Salon in Toronto (received via email) announcing an event on smart cities being held in the nearby city of Vaughan (it borders Toronto to the north). It’s led me on quite the chase as I’ve delved into a reference to Smart City projects taking place across the country; the results follow after this bit about the event.

smARTcities SALON

From the announcement,

SMARTCITIES SALON

Smart City projects are currently underway across the country, including Google SideWalk at Toronto Harbourfront. Canada’s first Smart Hospital is currently under construction in the City of Vaughan. It’s an example of the city working towards building a reputation as one of the world’s leading Smart Cities, by adopting new technologies consistent with priorities defined by citizen collaboration.

Hon. Maurizio Bevilacqua, P.C., Mayor chairs the Smart City Advisory Task Force leading historic transformation in Vaughan. Working to become a Smart City is a chance to encourage civic engagement, accelerate economic growth, and generate efficiencies. His opening address will outline some of the priorities and opportunities that our panel will discuss.

PANELISTS

Lilian Radovac, PhD., Assistant Professor, Institute of Communication, Culture, Information & Technology, University of Toronto. Lilian is a historian of urban sounds and cultures and has a critical interest in SmartCity initiatives in two of the cities she has called home: New York City and Toronto.

Oren Berkovich is the CEO of Singularity University in Canada, an educational institution and a global network of experts and entrepreneurs that work together on solving the world’s biggest challenges. As a catalyst for long-term growth Oren spends his time connecting people with ideas to facilitate strategic conversations about the future.

Frank Di Palma, the Chief Information Officer for the City of Vaughan, is a graduate of York University with more than 20 years experience in IT operations and services. Frank leads the many SmartCity initiatives already underway at Vaughan City Hall.

Ron Wild, artist and Digital Art/Science Collaborator, will moderate the discussion.

Audience Participation opportunities will enable attendees to forward questions for consideration by the panel.

You can register for the smARTcities SALON here on Eventbrite,

Art Exhibition Reception

Following the panel discussion, the audience is invited to view the art exhibition ‘smARTcities; exploring the digital frontier.’ Works commissioned by Vaughan specifically for the exhibition, including the SmartCity Map and SmartHospital Map, will be shown as well as other Art/Science-themed works. Many of these ‘maps’ were made by Ron in collaboration with mathematicians, scientists, and medical researchers, some of whom will be in attendance. Further examples of Ron’s art can be found HERE

Please click through to buy a FREE ticket so we know how many guests to expect. Thank you.

This event can be reached by taking the subway up the #1 west line to the new Vaughan Metropolitan Centre terminal station. Take the #20 bus to the Vaughan Mills transfer loop; transfer there to the #4/A which will take you to the stop right at City Hall. Free parking is available for those coming by car. Car-pooling and ride-sharing is encouraged. The facility is fully accessible.

Here’s one of Wild’s pieces,

144×96″ triptych, Vaughan, 2018 Artist: mrowade (Ron Wild?)

I’m pretty sure that mrowade is Ron Wild.

Smart Cities, the rest of the country, and Vancouver

Much to my surprise, I covered the ‘Smart Cities’ story in its early (but not earliest) days (and before it was Smart Cities) in two posts: January 30, 2015 and January 27, 2016 about the National Research Council of Canada (NRC) and its cities and technology public engagement exercises.

David Vogt in a July 12, 2016 posting on the Urban Opus website provides some catch up information,

Canada’s National Research Council (NRC) has identified Cities of the Future as a game-changing technology and economic opportunity. Following a national dialogue, an Executive Summit was held in Toronto on March 31, 2016, resulting in an important summary report that will become the seed for Canadian R&D strategy in this sector.

The conclusion so far is that the opportunity for Canada is to muster leadership in the following three areas (in order):

  1. Better Infrastructure and Infrastructure Management
  2. Efficient Transportation; and
  3. Renewable Energy

The National Research Council (NRC) offers a more balanced view of the situation on its “NRC capabilities in smart infrastructure and cities of the future” webpage,

Key opportunities for Canada

North America is one of the most urbanised regions in the world (82 % living in urban areas in 2014).
With growing urbanisation, sustainable development challenges will be increasingly concentrated in cities, requiring technology solutions.
Smart cities are data-driven, relying on broadband and telecommunications, sensors, social media, data collection and integration, automation, analytics and visualization to provide real-time situational analysis.
Most infrastructure will be “smart” by 2030 and transportation systems will be intelligent, adaptive and connected.
Renewable energy, energy storage, power quality and load measurement will contribute to smart grid solutions that are integrated with transportation.
“Green”, sustainable and high-performing construction and infrastructure materials are in demand.

Canadian challenges

High energy use: Transportation accounts for roughly 23% of Canada’s total greenhouse gas emissions, followed closely by the energy consumption of buildings, which accounts for 12% of Canada’s greenhouse gas emissions (Canada’s United Nations Framework Convention on Climate Change report).
Traffic congestion in Canadian cities is increasing, contributing to loss of productivity, increased stress for citizens as well as air and noise pollution.
Canadian cities are susceptible to extreme weather and events related to climate change (e.g., floods, storms).
Changing demographics: aging population (need for accessible transportation options, housing, medical and recreational services) and diverse (immigrant) populations.
Financial and jurisdictional issues: the inability of municipalities (who have primary responsibility) to finance R&D or large-scale solutions without other government assistance.

Opportunities being examined
Living lab

Test bed for smart city technology in order to quantify and demonstrate the benefits of smart cities.
Multiple partnering opportunities (e.g. municipalities, other government organizations, industry associations, universities, social sciences, urban planning).

The integrated city

Efficient transportation: integration of personal mobility and freight movement as key city and inter-city infrastructure.
Efficient and integrated transportation systems linked to city infrastructure.
Planning urban environments for mobility while repurposing redundant infrastructures (converting parking to the food-water-energy nexus) as population shifts away from personal transportation.

FOOD-WATER-ENERGY NEXUS

Sustainable urban bio-cycling.
System approach to the development of the technology platforms required to address the nexus.

Key enabling platform technologies
Artificial intelligence

Computer vision and image understanding
Adaptive robots; future robotic platforms for part manufacturing
Understanding human emotions from language
Next generation information extraction using deep learning
Speech recognition
Artificial intelligence to optimize talent management for human resources

Nanomaterials

Nanoelectronics
Nanosensing
Smart materials
Nanocomposites
Self-assembled nanostructures
Nanoimprint
Nanoplasmonic
Nanoclay
Nanocoating

Big data analytics

Predictive equipment maintenance
Energy management
Artificial intelligence for optimizing energy storage and distribution
Understanding and tracking of hazardous chemical elements
Process and design optimization

Printed electronics for Internet of Things

Inks and materials
Printing technologies
Large area, flexible, stretchable, printed electronics components
Applications: sensors for Internet of Things, wearables, antenna, radio-frequency identification tags, smart surfaces, packaging, security, signage

If you’re curious about the government’s plan with regard to implementation, this NRC webpage provides some fascinating insight into their hopes if not the reality. (I have mentioned artificial intelligence and the federal government before in a March 16, 2018 posting about the federal budget and science; scroll down approximately 50% of the way to the subsection titled, Budget 2018: Who’s watching over us? and scan for Michael Karlin’s name.)

As for the current situation, there’s a Smart Cities Challenge taking place. Both Toronto and Vancouver have webpages dedicated to their response to the challenge. (You may want to check your own city’s website to find out if it’s participating.) I have a preference for the Toronto page as they immediately state that they’re participating in this challenge and explain what they want from you. Vancouver’s page is, by comparison, a bit confusing: two videos are immediately presented to the reader and from there too many graphics compete for your attention. They do, however, offer something valuable: links to explanations for smart cities and for the challenge.

Here’s a description of the Smart Cities Challenge (from its webpage),

The Smart Cities Challenge

The Smart Cities Challenge is a pan-Canadian competition open to communities of all sizes, including municipalities, regional governments and Indigenous communities (First Nations, Métis and Inuit). The Challenge encourages communities to adopt a smart cities approach to improve the lives of their residents through innovation, data and connected technology.

  • One prize of up to $50 million open to all communities, regardless of population;
  • Two prizes of up to $10 million open to all communities with populations under 500,000 people; and
  • One prize of up to $5 million open to all communities with populations under 30,000 people.

Infrastructure Canada is engaging Indigenous leaders, communities and organizations to finalize the design of a competition specific to Indigenous communities that will reflect their unique realities and issues. Indigenous communities are also eligible to compete for all the prizes in the current competition.

The Challenge will be an open and transparent process. Communities that submit proposals will also post them online, so that residents and stakeholders can see them. An independent Jury will be appointed to select finalists and winners.

Applications are due by April 24, 2018. Communities interested in participating should visit the Impact Canada Challenge Platform for the applicant guide and more information.

Finalists will be announced in the Summer of 2018 and winners in Spring 2019 according to the information on the Impact Canada Challenge Platform.

It’s not clear to me if Jessie Adcock (City of Vancouver Chief Digital Officer) is leading Vancouver’s effort to win the Smart Cities Challenge, but her Twitter feed certainly features information on the topic and, I suspect, if you’re looking for the most up-to-date information on Vancouver’s participation, you’re more likely to find it on her feed than on the City of Vancouver’s Smart Cities Challenge webpage.