Tag Archives: cybersecurity

‘One health in the 21st century’ event and internship opportunities at the Woodrow Wilson Center

One health

This event at the Woodrow Wilson International Center for Scholars (Wilson Center) is the first that I’ve seen of its kind (from a November 2, 2018 Wilson Center Science and Technology Innovation Program [STIP] announcement received via email; Note: Logistics such as date and location follow directly after),

One Health in the 21st Century Workshop

The  One Health in the 21st Century workshop will serve as a snapshot of government, intergovernmental organization and non-governmental organization innovation as it pertains to the expanding paradigm of One Health. One Health being the umbrella term for addressing animal, human, and environmental health issues as inextricably linked [emphasis mine], each informing the other, rather than as distinct disciplines.

This snapshot, facilitated by a partnership between the Wilson Center, World Bank, and EcoHealth Alliance, aims to bridge professional silos represented at the workshop to address the current gaps and future solutions in the operationalization and institutionalization of One Health across sectors. With an initial emphasis on environmental resource management and assessment as well as federal cooperation, the One Health in the 21st Century Workshop is a launching point for upcoming events, convenings, and products, sparked by the partnership between the hosting organizations. RSVP today.

Agenda:

1:00pm — 1:15pm: Introductory Remarks

1:15pm — 2:30pm: Keynote and Panel: Putting One Health into Practice

Larry Madoff — Director of Emerging Disease Surveillance; Editor, ProMED-mail
Lance Brooks — Chief, Biological Threat Reduction Department at DoD
Further panelists TBA

2:30pm — 2:40pm: Break

2:40pm — 3:50pm: Keynote and Panel: Adding Seats at the One Health Table: Promoting the Environmental Backbone at Home and Abroad

Assaf Anyamba — NASA Research Scientist
Jonathan Sleeman — Center Director for the U.S. Geological Survey’s National Wildlife Health Center
Jennifer Orme-Zavaleta — Principal Deputy Assistant Administrator for Science for the Office of Research and Development and the EPA Science Advisor
Further panelists TBA

3:50pm — 4:50pm: Breakout Discussions and Report Back Panel

4:50pm — 5:00pm: Closing Remarks

5:00pm — 6:00pm: Networking Happy Hour


You can register/RSVP here.

Logistics are:

November 26
1:00pm – 5:00pm
Reception to follow
5:00pm – 6:00pm

Flom Auditorium, 6th floor

Directions

Wilson Center
Ronald Reagan Building and
International Trade Center
One Woodrow Wilson Plaza
1300 Pennsylvania Ave., NW
Washington, D.C. 20004

Phone: 202.691.4000

stip@wilsoncenter.org


Internships

The Woodrow Wilson Center is gearing up for 2019, although the deadline for a Spring 2019 internship application is November 15, 2018. (You can find my previous announcement for internships in a July 23, 2018 posting.) From a November 5, 2018 Wilson Center STIP announcement (received via email),

Internships in DC for Science and Technology Policy

Deadline for Spring Applicants November 15

The Science and Technology Innovation Program (STIP) at the Wilson Center welcomes applicants for spring 2019 internships. STIP focuses on understanding bottom-up, public innovation; top-down, policy innovation; and, on supporting responsible and equitable practices at the point where new technology and existing political, social, and cultural processes converge. We recommend exploring our blog and website first to determine if your research interests align with current STIP programming.

We offer two types of internships: research (open to law and graduate students only) and a social media and blogging internship (open to undergraduates, recent graduates, and graduate students). Research internships might deal with one of the following key objectives:

  • Artificial Intelligence
  • Citizen Science
  • Cybersecurity
  • One Health
  • Public Communication of Science
  • Serious Games Initiative
  • Science and Technology Policy

Additionally, we are offering specific internships for focused projects, such as for our Earth Challenge 2020 initiative.

Special Project Intern: Earth Challenge 2020

Citizen science involves members of the public in scientific research to meet real world goals.  In celebration of the 50th anniversary of Earth Day, Earth Day Network (EDN), The U.S. Department of State, and the Wilson Center are launching Earth Challenge 2020 (EC2020) as the world’s largest ever coordinated citizen science campaign.  EC2020 will collaborate with existing citizen science projects as well as build capacity for new ones as part of a larger effort to grow citizen science worldwide.  We will become a nexus for collecting billions of observations in areas including air quality, water quality, biodiversity, and human health to strengthen the links between science, the environment, and public citizens.

We are seeking a research intern with a specialty in topics including citizen science, crowdsourcing, making, hacking, sensor development, and other relevant topics.

This intern will scope and implement a semester-long project related to Earth Challenge 2020 deliverables. In addition, the intern may:

  • Conduct ad hoc research on a range of topics in science and technology innovation to learn while supporting department priorities.
  • Write or edit articles and blog posts on topics of interest or local events.
  • Support meetings, conferences, and other events, gaining valuable event management experience.
  • Provide general logistical support.

This is a paid position available for 15-20 hours a week.  Applicants from all backgrounds will be considered, though experience conducting cross and trans-disciplinary research is an asset.  Ability to work independently is critical.

Interested applicants should submit a resume, cover letter describing their interest in Earth Challenge 2020 and outlining relevant skills, and two writing samples. One writing sample should be formal (e.g., a class paper); the other, informal (e.g., a blog post or similar).

For all internships, non-degree seeking students are ineligible. All internships must be served in Washington, D.C. and cannot be done remotely.

Full application process outlined on our internship website.

I don’t see a specific application deadline for the special project (Earth Challenge 2020) internship. In any event, good luck with all your applications.

D-Wave and the first large-scale quantum simulation of topological state of matter

This is all about a local quantum computing company, D-Wave Systems (Burnaby is one of the metro Vancouver municipalities). The company has been featured here from time to time, usually for its quantum technology (it is considered a technology star in local and, I think, other circles), but my March 9, 2018 posting about the SXSW (South by Southwest) festival noted that Bo Ewald, President, D-Wave Systems US, was a member of the ‘Quantum Computing: Science Fiction to Science Fact’ panel.

Now, they’re back making technology announcements like this August 22, 2018 news item on phys.org (Note: Links have been removed),

D-Wave Systems today [August 22, 2018] published a milestone study demonstrating a topological phase transition using its 2048-qubit annealing quantum computer. This complex quantum simulation of materials is a major step toward reducing the need for time-consuming and expensive physical research and development.

The paper, entitled “Observation of topological phenomena in a programmable lattice of 1,800 qubits”, was published in the peer-reviewed journal Nature. This work marks an important advancement in the field and demonstrates again that the fully programmable D-Wave quantum computer can be used as an accurate simulator of quantum systems at a large scale. The methods used in this work could have broad implications in the development of novel materials, realizing Richard Feynman’s original vision of a quantum simulator. This new research comes on the heels of D-Wave’s recent Science paper demonstrating a different type of phase transition in a quantum spin-glass simulation. The two papers together signify the flexibility and versatility of the D-Wave quantum computer in quantum simulation of materials, in addition to other tasks such as optimization and machine learning.

An August 22, 2018 D-Wave Systems news release (also on EurekAlert), which originated the news item, delves further (Note: A link has been removed),

In the early 1970s, theoretical physicists Vadim Berezinskii, J. Michael Kosterlitz and David Thouless predicted a new state of matter characterized by nontrivial topological properties. The work was awarded the Nobel Prize in Physics in 2016. D-Wave researchers demonstrated this phenomenon by programming the D-Wave 2000Q™ system to form a two-dimensional frustrated lattice of artificial spins. The observed topological properties in the simulated system cannot exist without quantum effects and closely agree with theoretical predictions.

“This paper represents a breakthrough in the simulation of physical systems which are otherwise essentially impossible,” said 2016 Nobel laureate Dr. J. Michael Kosterlitz. “The test reproduces most of the expected results, which is a remarkable achievement. This gives hope that future quantum simulators will be able to explore more complex and poorly understood systems so that one can trust the simulation results in quantitative detail as a model of a physical system. I look forward to seeing future applications of this simulation method.”

“The work described in the Nature paper represents a landmark in the field of quantum computation: for the first time, a theoretically predicted state of matter was realized in quantum simulation before being demonstrated in a real magnetic material,” said Dr. Mohammad Amin, chief scientist at D-Wave. “This is a significant step toward reaching the goal of quantum simulation, enabling the study of material properties before making them in the lab, a process that today can be very costly and time consuming.”

“Successfully demonstrating physics of Nobel Prize-winning importance on a D-Wave quantum computer is a significant achievement in and of itself. But in combination with D-Wave’s recent quantum simulation work published in Science, this new research demonstrates the flexibility and programmability of our system to tackle recognized, difficult problems in a variety of areas,” said Vern Brownell, D-Wave CEO.

“D-Wave’s quantum simulation of the Kosterlitz-Thouless transition is an exciting and impactful result. It not only contributes to our understanding of important problems in quantum magnetism, but also demonstrates solving a computationally hard problem with a novel and efficient mapping of the spin system, requiring only a limited number of qubits and opening new possibilities for solving a broader range of applications,” said Dr. John Sarrao, principal associate director for science, technology, and engineering at Los Alamos National Laboratory.

“The ability to demonstrate two very different quantum simulations, as we reported in Science and Nature, using the same quantum processor, illustrates the programmability and flexibility of D-Wave’s quantum computer,” said Dr. Andrew King, principal investigator for this work at D-Wave. “This programmability and flexibility were two key ingredients in Richard Feynman’s original vision of a quantum simulator and open up the possibility of predicting the behavior of more complex engineered quantum systems in the future.”

The achievements presented in Nature and Science join D-Wave’s continued work with world-class customers and partners on real-world prototype applications (“proto-apps”) across a variety of fields. The 70+ proto-apps developed by customers span optimization, machine learning, quantum material science, cybersecurity, and more. Many of the proto-apps’ results show that D-Wave systems are approaching, and sometimes surpassing, conventional computing in terms of performance or solution quality on real problems, at pre-commercial scale. As the power of D-Wave systems and software expands, these proto-apps point to the potential for scaled customer application advantage on quantum computers.
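For readers who would like a feel for what a Kosterlitz-Thouless transition actually is, here is a rough classical sketch of my own (to be clear, this is not D-Wave’s method; their experiment runs on a programmable lattice of 1,800 superconducting qubits, not on this toy): a Metropolis Monte Carlo simulation of the two-dimensional XY model, the textbook system for the transition, which counts the vortices whose binding and unbinding define it. The lattice size, temperatures, and sweep counts below are arbitrary choices for illustration only.

```python
# Illustrative classical toy of the 2D XY model, the textbook system for the
# Kosterlitz-Thouless (KT) transition. NOT D-Wave's quantum method; lattice
# size, temperatures, and sweep counts are arbitrary choices for a quick demo.
import numpy as np

rng = np.random.default_rng(0)

def sweep(theta, T, rng):
    """One Metropolis sweep over an L x L lattice of planar spins."""
    L = theta.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        new = theta[i, j] + rng.uniform(-np.pi / 2, np.pi / 2)
        # Energy change against the four nearest neighbours; E = -sum cos(dtheta)
        nbrs = [theta[(i + 1) % L, j], theta[(i - 1) % L, j],
                theta[i, (j + 1) % L], theta[i, (j - 1) % L]]
        dE = sum(np.cos(theta[i, j] - n) - np.cos(new - n) for n in nbrs)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            theta[i, j] = new
    return theta

def vortex_density(theta):
    """Fraction of plaquettes whose winding of spin angles is +/- 2*pi."""
    L = theta.shape[0]
    def wrap(a):  # map an angle difference into (-pi, pi]
        return (a + np.pi) % (2 * np.pi) - np.pi
    count = 0
    for i in range(L):
        for j in range(L):
            w = (wrap(theta[(i + 1) % L, j] - theta[i, j]) +
                 wrap(theta[(i + 1) % L, (j + 1) % L] - theta[(i + 1) % L, j]) +
                 wrap(theta[i, (j + 1) % L] - theta[(i + 1) % L, (j + 1) % L]) +
                 wrap(theta[i, j] - theta[i, (j + 1) % L]))
            if abs(w) > np.pi:  # winding of +/- 2*pi marks a (anti)vortex
                count += 1
    return count / (L * L)

L = 16
densities = {}
for T in (0.5, 2.0):  # well below and well above the KT temperature (~0.89)
    theta = rng.uniform(0, 2 * np.pi, size=(L, L))
    for _ in range(300):
        sweep(theta, T, rng)
    densities[T] = vortex_density(theta)

print(densities)  # expect far fewer vortices at T=0.5 than at T=2.0
```

Below the transition temperature, vortices and antivortices bind into pairs and the vortex density stays low; above it, free vortices proliferate, which is what the toy run should show.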

The company has prepared a video describing Richard Feynman’s proposal about quantum computing and celebrating their latest achievement,

Here’s the company’s Youtube video description,

In 1982, Richard Feynman proposed the idea of simulating the quantum physics of complex systems with a programmable quantum computer. In August 2018, his vision was realized when researchers from D-Wave Systems and the Vector Institute demonstrated the simulation of a topological phase transition—the subject of the 2016 Nobel Prize in Physics—in a fully programmable D-Wave 2000Q™ annealing quantum computer. This complex quantum simulation of materials is a major step toward reducing the need for time-consuming and expensive physical research and development.

You may want to check out the comments in response to the video.

Here’s a link to and a citation for the Nature paper,

Observation of topological phenomena in a programmable lattice of 1,800 qubits by Andrew D. King, Juan Carrasquilla, Jack Raymond, Isil Ozfidan, Evgeny Andriyash, Andrew Berkley, Mauricio Reis, Trevor Lanting, Richard Harris, Fabio Altomare, Kelly Boothby, Paul I. Bunyk, Colin Enderud, Alexandre Fréchette, Emile Hoskinson, Nicolas Ladizinsky, Travis Oh, Gabriel Poulin-Lamarre, Christopher Rich, Yuki Sato, Anatoly Yu. Smirnov, Loren J. Swenson, Mark H. Volkmann, Jed Whittaker, Jason Yao, Eric Ladizinsky, Mark W. Johnson, Jeremy Hilton, & Mohammad H. Amin. Nature volume 560, pages 456–460 (2018) DOI: https://doi.org/10.1038/s41586-018-0410-x Published 22 August 2018

This paper is behind a paywall but, for those who don’t have access, there is a synopsis here.

For anyone curious about the earlier paper published in July 2018, here’s a link and a citation,

Phase transitions in a programmable quantum spin glass simulator by R. Harris, Y. Sato, A. J. Berkley, M. Reis, F. Altomare, M. H. Amin, K. Boothby, P. Bunyk, C. Deng, C. Enderud, S. Huang, E. Hoskinson, M. W. Johnson, E. Ladizinsky, N. Ladizinsky, T. Lanting, R. Li, T. Medina, R. Molavi, R. Neufeld, T. Oh, I. Pavlov, I. Perminov, G. Poulin-Lamarre, C. Rich, A. Smirnov, L. Swenson, N. Tsai, M. Volkmann, J. Whittaker, J. Yao. Science 13 Jul 2018: Vol. 361, Issue 6398, pp. 162-165 DOI: 10.1126/science.aat2025

This paper too is behind a paywall.

You can find out more about D-Wave here.

London gets its first Chief Digital Officer (CDO)

A report commissioned from 2thinknow by Business Insider ranking the 25 most high-tech cities in the world (Vancouver, Canada rates 14th on the list) is featured in an Aug. 25, 2017 news item on the Daily Hive Vancouver,

The ranking was based on 10 factors related to technological advancement, including the number of patents filed per capita, startups, tech venture capitalists, ranking in other innovation datasets, and level of smartphone use.

Topping the list, which was released this month, is San Francisco’s “Silicon Valley,” which “wins in just about every category.” New York comes in second place, followed by London [UK; emphasis mine], Los Angeles, and Seoul.

Intriguingly, London’s Mayor Sadiq Khan announced a new Chief Digital Officer for the city just a few days later. From an August 29, 2017 news item by Michael Moore for Beta News,

Theo Blackwell, a former cabinet member at Camden Council, will take responsibility for helping London continue to be the technology powerhouse it has become over the past few years.

Mr Blackwell will work closely with the Mayor’s office, particularly the Smart London Board, to create a new “Smart London Plan” that looks to outline how the capital can benefit from embracing new technologies, with cybersecurity, open data and connectivity all at the forefront.

He will also look to build collaboration across London’s boroughs when it comes to public technology schemes, and encourage the digital transformation of public services.

“The new chief digital officer post is an amazing opportunity to make our capital even more open to innovation, support jobs and investment and make our public services more effective,” he said in a statement.

An August 25, 2017 Mayor of London press release, which originated the news item, provides a more detailed look at the position and the motives for creating it,

The Mayor of London, Sadiq Khan, has today (25 August [2017]) appointed Theo Blackwell as the capital’s first ever Chief Digital Officer (CDO).

As London’s first CDO, Theo will play a leading role in realising the Mayor’s ambition to make London the world’s smartest city, ensuring that the capital’s status as a global tech hub helps transform the way public services are designed and delivered, making them more accessible, efficient and responsive to the needs of Londoners. The appointment fulfils a key manifesto commitment made by the Mayor.

He joins the Mayor’s team following work at GovTech accelerator Public Group, advising start-ups on the growing market in local public services, and was previously Head of Policy & Public Affairs for the video games industry’s trade body, Ukie – where he ran a ‘Next Gen Skills’ campaign to get coding back on the curriculum.

Theo brings more than 20 years of experience in technology and digital transformation in both the public and private sector.  In his role as cabinet member for finance, technology and growth at Camden Council, Theo has established Camden as London’s leading digital borough through its use of public data – and this year they received national recognition as Digital Leaders ‘Council of the year’.

Theo also sits on the Advisory Board of Digital Leaders and is a director of Camden Town Unlimited, a Business Improvement District which pioneered new start-up incubation in ‘meanwhile’ space.

Theo will work closely with the Mayor’s Smart London Board to develop a new Smart London Plan, and will play a central role in building collaboration across London’s boroughs, and businesses, to drive the digital transformation of public services, as well as supporting the spread of innovation through common technology standards and better data-sharing.

Theo will also promote manifesto ambitions around pan-London collaboration on connectivity, digital inclusion, cyber-security and open data. He will also focus on scoping work for the London Office for Technology & Innovation that was announced by the Mayor at London Tech Week.

London already has more than 47,000 digital technology companies, employing approximately 240,000 people. It is forecast that the number of tech companies will increase by a third and a further 44,500 jobs will have been created by 2026.

The capital is also racing ahead with new technologies, using them for ticketing and contactless payment on the transport network, while the London Datastore is an open resource with vast amounts of data about all areas of the city, and tech start-ups have used this open data to create innovative new apps.

The Mayor of London, Sadiq Khan, said:

I am determined to make London the world’s leading ‘smart city’ with digital technology and data at the heart of making our capital a better place to live, work and visit. We already lead in digital technology, data science and innovation and I want us to make full use of this in transforming our public services for Londoners and the millions of visitors to our great city.

I am delighted to appoint Theo Blackwell as London’s first Chief Digital Officer, and I know he will use his experience working in the technology sector and developing public services to improve the lives of all Londoners.

Theo Blackwell said:

The new Chief Digital Officer post is an amazing opportunity to make our capital even more open to innovation, support jobs and investment and make our public services more effective. The pace of change over the next decade requires public services to develop a stronger relationship with the tech sector.  Our purpose is to fully harness London’s world-class potential to make our public services faster and more reliable at doing things we expect online, but also adaptable enough to overcome the capital’s most complex challenges.

Antony Walker, Deputy CEO of techUK, said:

techUK has long argued that London needed a Chief Digital Officer to ensure that London makes the best possible use of new digital technologies. The appointment of Theo Blackwell is good news for Londoners. The smart use of new digital technologies can improve the lives of people living in or visiting London. Theo Blackwell brings a deep understanding of both the opportunities ahead and the challenges of implementing new digital technologies to address the city’s most pressing problems. This appointment is an important step forward to London being at the forefront of tech innovation to create smart places and communities where citizens want to live, work and thrive.

Councillor Claire Kober, Chair of London Councils, said:

The appointment of London’s first Chief Digital Officer fills an important role providing needed digital leadership for London’s public services.  Theo will bring his longstanding experience working with other borough leaders, which I think is critical as we develop new approaches to developing, procuring and scaling the best digital solutions across the capital.

Robin Knowles, Founder and CEO of Digital Leaders, said:

Theo Blackwell has huge experience and is a fabulous appointment as the capital’s first Chief Digital Officer.  He will do a great job for London.

Doteveryone founder, Baroness Martha Lane Fox, said:

Digital leadership is a major challenge for the public sector. As the new Chief Digital Officer for London, Theo brings real experience to this role: a track record of delivering real change in local government and his work in the tech sector.

Mike Flowers, First Chief Analytics Officer for New York City and Chief Analytics Officer at Enigma Technologies, said:

Theo is a pragmatic visionary with that rare combination of tech savvy and human focus that the task ahead of him requires. I congratulate Mayor Khan on his decision to trust him with this critical role, and I’m very happy for the residents of London whose lives will be improved by the better use of data and technology by their government. Theo gets results.

It’s always possible that there’s a mastermind involved in the timing of these announcements but sometimes they’re just a reflection of a trend. Cities have their moments just like people do and it seems like London may be on an upswing. From an August 18 (?), 2017 opinion piece by Gavin Poole (Chief Executive Officer, Here East) for ITProPortal,

Recently released data from London & Partners indicates that record levels of venture capital investment are flooding into the London tech sector, with a record £1.1 billion being invested since the start of the year. Strikingly, 2017 has seen a fourfold increase in investment compared with 2013. This indicates that, despite Brexit fears, London retains its crown as Europe’s number one tech hub for global investors but we must make sure that we keep that place by protecting access to the world’s best talent.

As the tech sector continues to outperform the rest of the UK economy, London’s place in it will become all the more important. When London does well, so too does the rest of the UK. Mega-deals from challenger brands like Monzo and Improbable, and the recent opening of Europe’s newest technology innovation destination, Plexal, at Here East have helped to cement the tech sector’s future in the medium-term. Government too has recognised the strength of the sector; earlier this month the Department for Culture, Media and Sport rebranded as the Department for Digital, Culture, Media and Sport. This name change, 25 years after the department’s creation, signifies how much things have developed. There is now also a Minister of State for Digital who covers everything from broadband and mobile connectivity to the creative industries. This visible commitment by the Government to put digital at the heart of its agenda should be welcomed.

There are lots of reasons for London’s tech success: start-ups and major corporates look to London for its digital and geographical connectivity, the entrepreneurialism of its tech talent and the vibrancy of its urban life. We continue to lead Europe on all of these fronts and Sadiq Khan’s #LondonIsOpen campaign has made clear that the city remains welcoming and accessible. In fact, there’s no shortage of start-ups proclaiming the great things about London. Melissa Morris, CEO and Founder, Lantum, a company that recently secured £5.3 million in funding in London, said “London is the world’s coolest city – it attracts some of the most interesting people from across the world… We’ve just closed a round of funding, and our plans are very much about growth”.

As for Vancouver, we don’t have any science officers or technology officers or anything of that ilk. Our current mayor, Gregor Robertson, who pledged to reduce homelessness almost 10 years ago has experienced a resounding failure with regard to that pledge but his greenest city pledge has enjoyed more success. As far as I’m aware the mayor and the current city council remain blissfully uninvolved in major initiatives to encourage science and technology efforts although there was a ‘sweetheart’ real estate deal for local technology company, Hootsuite. A Feb. 18, 2014 news item on the CBC (Canadian Broadcasting Corporation) website provides a written description of the deal but there is also this video,

Robertson went on to win his election despite the hint of financial misdoings in the video but there is another election* coming in 2018. The city official in the video, Penny Ballem, was terminated in September 2015 *due to what seemed to be her attempts to implement policy at a pace some found disconcerting*. In the meantime, the Liberal party, which made up our provincial government until recently (July 2017), was excoriated for its eagerness to accept political money and pledged to ‘change the rules’, as did the parties which were running in opposition. As far as I’m aware, there have been no changes that will impact provincial or municipal politicians in the near future.

Getting back to government initiatives that encourage science and technology efforts in Vancouver, there is the Cascadia Innovation Corridor. Calling it governmental is a bit of a stretch as it seems to be a Microsoft initiative that found favour with the governments of Washington state and the province of British Columbia; Vancouver will be one of the happy recipients. See my Feb. 28, 2017 posting and August 28, 2017 posting for more details about the proposed Corridor.

In any event, I’d like to see a science policy and at this point I don’t care if it’s a city policy or a provincial policy.

*’elections’ corrected to ‘election’ and ‘due to what seemed to be her attempts to implement policy at a pace some found disconcerting’ added for clarity on August 31, 2017.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take journalist Matthew Braga’s word for the situation (his June 14, 2017 CBC article is excerpted below).

In any event, Toronto seemed to have a mild advantage over Montréal initially with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and has many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives, but at that point the article’s focus shifts exclusively to Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “xxx is like no other business before” and “it is a technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’, which, according to his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.

Emerging technology and the law

I have three news bits about legal issues that are arising as a consequence of emerging technologies.

Deep neural networks, art, and copyright

Caption: The rise of automated art opens new creative avenues, coupled with new problems for copyright protection. Credit: Provided by: Alexander Mordvintsev, Christopher Olah and Mike Tyka

Presumably this artwork is a demonstration of automated art, although neither the news item nor the news release ever really explains how. An April 26, 2017 news item on ScienceDaily announces research into copyright and the latest in using neural networks to create art,

In 1968, sociologist Jean Baudrillard wrote on automatism that “contained within it is the dream of a dominated world […] that serves an inert and dreamy humanity.”

With the growing popularity of Deep Neural Networks (DNN’s), this dream is fast becoming a reality.

Dr. Jean-Marc Deltorn, researcher at the Centre d’études internationales de la propriété intellectuelle in Strasbourg, argues that we must remain a responsive and responsible force in this process of automation — not inert dominators. As he demonstrates in a recent Frontiers in Digital Humanities paper, the dream of automation demands a careful study of the legal problems linked to copyright.

An April 26, 2017 Frontiers (publishing) news release on EurekAlert, which originated the news item, describes the research in more detail,

For more than half a century, artists have looked to computational processes as a way of expanding their vision. DNN’s are the culmination of this cross-pollination: by learning to identify a complex number of patterns, they can generate new creations.

These systems are made up of complex algorithms modeled on the transmission of signals between neurons in the brain.

DNN creations rely in equal measure on human inputs and the non-human algorithmic networks that process them.

Inputs are fed into the system, which is layered. Each layer provides an opportunity for a more refined knowledge of the inputs (shape, color, lines). Neural networks compare actual outputs to expected ones, and correct the predictive error through repetition and optimization. They train their own pattern recognition, thereby optimizing their learning curve and producing increasingly accurate outputs.

The deeper the layers are, the higher the level of abstraction. The highest layers are able to identify the contents of a given input with reasonable accuracy, after extended periods of training.
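The training loop described above — layered abstraction, comparison of actual to expected outputs, and correction of the predictive error through repetition — can be sketched in a few lines. This is a purely illustrative toy two-layer network on an invented task (XOR), not anything from the paper; all sizes and values are made up for the example:

```python
import numpy as np

# Toy two-layer network: each layer refines the input representation,
# and repetition corrects the predictive error, as described above.
rng = np.random.default_rng(0)

# Invented toy task: XOR, which a single layer cannot represent.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 8))  # layer 1: raw inputs -> hidden features
W2 = rng.normal(size=(8, 1))  # layer 2: features -> prediction

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(inputs):
    h = sigmoid(inputs @ W1)       # a first level of abstraction
    return h, sigmoid(h @ W2)      # the network's actual output

_, out = forward(X)
initial_mse = float(((out - y) ** 2).mean())  # error before training

for _ in range(5000):              # repetition and optimization
    h, out = forward(X)
    err = out - y                  # compare actual to expected outputs
    grad_out = err * out * (1 - out)
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ grad_out           # adjust both layers to shrink the error
    W1 -= X.T @ grad_h

_, out = forward(X)
final_mse = float(((out - y) ** 2).mean())    # error after training
```

Deeper stacks of such layers are what give the networks the higher levels of abstraction the release describes.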

Creation thus becomes increasingly automated through what Deltorn calls “the arcane traceries of deep architecture”. The results are sufficiently abstracted from their sources to produce original creations that have been exhibited in galleries, sold at auction and performed at concerts.

The originality of DNN’s is a combined product of technological automation on one hand, human inputs and decisions on the other.

DNN’s are gaining popularity. Various platforms (such as DeepDream) now allow internet users to generate their very own new creations. This popularization of the automation process calls for a comprehensive legal framework that ensures a creator’s economic and moral rights with regards to his work – copyright protection.

Form, originality and attribution are the three requirements for copyright. And while DNN creations satisfy the first of these three, the claim to originality and attribution will depend largely on a given country’s legislation and on the traceability of the human creator.

Legislation usually sets a low threshold to originality. As DNN creations could in theory be able to create an endless number of riffs on source materials, the uncurbed creation of original works could inflate the existing number of copyright protections.

Additionally, a small number of national copyright laws confers attribution to what UK legislation defines loosely as “the person by whom the arrangements necessary for the creation of the work are undertaken.” In the case of DNN’s, this could mean anybody from the programmer to the user of a DNN interface.

Combined with an overly supple take on originality, this view on attribution would further increase the number of copyrightable works.

The risk, in both cases, is that artists will be less willing to publish their own works, for fear of infringement of DNN copyright protections.

In order to promote creativity – one seminal aim of copyright protection – the issue must be limited to creations that manifest a personal voice “and not just the electric glint of a computational engine,” to quote Deltorn. A delicate act of discernment.

DNN’s promise new avenues of creative expression for artists – with potential caveats. Copyright protection – a “catalyst to creativity” – must be contained. Many of us gently bask in the glow of an increasingly automated form of technology. But if we want to safeguard the ineffable quality that defines much art, it might be a good idea to hone in more closely on the differences between the electric and the creative spark.

This research is and will be part of a broader Frontiers Research Topic collection of articles on Deep Learning and Digital Humanities.

Here’s a link to and a citation for the paper,

Deep Creations: Intellectual Property and the Automata by Jean-Marc Deltorn. Front. Digit. Humanit., 01 February 2017 | https://doi.org/10.3389/fdigh.2017.00003

This paper is open access.

Conference on governance of emerging technologies

I received an April 17, 2017 notice via email about this upcoming conference. Here’s more from the Fifth Annual Conference on Governance of Emerging Technologies: Law, Policy and Ethics webpage,

The Fifth Annual Conference on Governance of Emerging Technologies:

Law, Policy and Ethics held at the new

Beus Center for Law & Society in Phoenix, AZ

May 17-19, 2017!

Call for Abstracts – Now Closed

The conference will consist of plenary and session presentations and discussions on regulatory, governance, legal, policy, social and ethical aspects of emerging technologies, including (but not limited to) nanotechnology, synthetic biology, gene editing, biotechnology, genomics, personalized medicine, human enhancement technologies, telecommunications, information technologies, surveillance technologies, geoengineering, neuroscience, artificial intelligence, and robotics. The conference is premised on the belief that there is much to be learned and shared from and across the governance experience and proposals for these various emerging technologies.

Keynote Speakers:

Gillian Hadfield, Richard L. and Antoinette Schamoi Kirtland Professor of Law and Professor of Economics, USC [University of Southern California] Gould School of Law

Shobita Parthasarathy, Associate Professor of Public Policy and Women’s Studies, Director, Science, Technology, and Public Policy Program University of Michigan

Stuart Russell, Professor at [University of California] Berkeley, is a computer scientist known for his contributions to artificial intelligence

Craig Shank, Vice President for Corporate Standards Group in Microsoft’s Corporate, External and Legal Affairs (CELA)

Plenary Panels:

Innovation – Responsible and/or Permissionless

Ellen-Marie Forsberg, Senior Researcher/Research Manager at Oslo and Akershus University College of Applied Sciences

Adam Thierer, Senior Research Fellow with the Technology Policy Program at the Mercatus Center at George Mason University

Wendell Wallach, Consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics

 Gene Drives, Trade and International Regulations

Greg Kaebnick, Director, Editorial Department; Editor, Hastings Center Report; Research Scholar, Hastings Center

Jennifer Kuzma, Goodnight-North Carolina GlaxoSmithKline Foundation Distinguished Professor in Social Sciences in the School of Public and International Affairs (SPIA) and co-director of the Genetic Engineering and Society (GES) Center at North Carolina State University

Andrew Maynard, Senior Sustainability Scholar, Julie Ann Wrigley Global Institute of Sustainability Director, Risk Innovation Lab, School for the Future of Innovation in Society Professor, School for the Future of Innovation in Society, Arizona State University

Gary Marchant, Regents’ Professor of Law, Professor of Law Faculty Director and Faculty Fellow, Center for Law, Science & Innovation, Arizona State University

Marc Saner, Inaugural Director of the Institute for Science, Society and Policy, and Associate Professor, University of Ottawa Department of Geography

Big Data

Anupam Chander, Martin Luther King, Jr. Professor of Law and Director, California International Law Center, UC Davis School of Law

Pilar Ossorio, Professor of Law and Bioethics, University of Wisconsin, School of Law and School of Medicine and Public Health; Morgridge Institute for Research, Ethics Scholar-in-Residence

George Poste, Chief Scientist, Complex Adaptive Systems Initiative (CASI) (http://www.casi.asu.edu/), Regents’ Professor and Del E. Webb Chair in Health Innovation, Arizona State University

Emily Shuckburgh, climate scientist and deputy head of the Polar Oceans Team at the British Antarctic Survey, University of Cambridge

 Responsible Development of AI

Spring Berman, Ira A. Fulton Schools of Engineering, Arizona State University

John Havens, The IEEE [Institute of Electrical and Electronics Engineers] Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

Subbarao Kambhampati, Senior Sustainability Scientist, Julie Ann Wrigley Global Institute of Sustainability, Professor, School of Computing, Informatics and Decision Systems Engineering, Ira A. Fulton Schools of Engineering, Arizona State University

Wendell Wallach, Consultant, Ethicist, and Scholar at Yale University’s Interdisciplinary Center for Bioethics

Existential and Catastrophic Ricks [sic]

Tony Barrett, Co-Founder and Director of Research of the Global Catastrophic Risk Institute

Haydn Belfield, Academic Project Administrator, Centre for the Study of Existential Risk at the University of Cambridge

Margaret E. Kosal, Associate Director, Sam Nunn School of International Affairs, Georgia Institute of Technology

Catherine Rhodes, Academic Project Manager, Centre for the Study of Existential Risk (CSER), University of Cambridge

These are the panels of interest to me; there are others on the homepage.

Here’s some information from the Conference registration webpage,

Early Bird Registration – $50 off until May 1! Enter discount code: earlybirdGETs50

New: Group Discount – Register 2+ attendees together and receive an additional 20% off for all group members!

Click Here to Register!

Conference registration fees are as follows:

  • General (non-CLE) Registration: $150.00
  • CLE Registration: $350.00
  • *Current Student / ASU Law Alumni Registration: $50.00
  • ^Cybersecurity sessions only (May 19): $100 CLE / $50 General / Free for students (registration info coming soon)

There you have it.

Neuro-techno future laws

I’m pretty sure this isn’t the first exploration of potential legal issues arising from research into neuroscience although it’s the first one I’ve stumbled across. From an April 25, 2017 news item on phys.org,

New human rights laws to prepare for advances in neurotechnology that put the ‘freedom of the mind’ at risk have been proposed today in the open access journal Life Sciences, Society and Policy.

The authors of the study suggest four new human rights laws could emerge in the near future to protect against exploitation and loss of privacy. The four laws are: the right to cognitive liberty, the right to mental privacy, the right to mental integrity and the right to psychological continuity.

An April 25, 2017 Biomed Central news release on EurekAlert, which originated the news item, describes the work in more detail,

Marcello Ienca, lead author and PhD student at the Institute for Biomedical Ethics at the University of Basel, said: “The mind is considered to be the last refuge of personal freedom and self-determination, but advances in neural engineering, brain imaging and neurotechnology put the freedom of the mind at risk. Our proposed laws would give people the right to refuse coercive and invasive neurotechnology, protect the privacy of data collected by neurotechnology, and protect the physical and psychological aspects of the mind from damage by the misuse of neurotechnology.”

Advances in neurotechnology, such as sophisticated brain imaging and the development of brain-computer interfaces, have led to these technologies moving away from a clinical setting and into the consumer domain. While these advances may be beneficial for individuals and society, there is a risk that the technology could be misused and create unprecedented threats to personal freedom.

Professor Roberto Andorno, co-author of the research, explained: “Brain imaging technology has already reached a point where there is discussion over its legitimacy in criminal court, for example as a tool for assessing criminal responsibility or even the risk of reoffending. Consumer companies are using brain imaging for ‘neuromarketing’, to understand consumer behaviour and elicit desired responses from customers. There are also tools such as ‘brain decoders’ which can turn brain imaging data into images, text or sound. All of these could pose a threat to personal freedom which we sought to address with the development of four new human rights laws.”

The authors explain that as neurotechnology improves and becomes commonplace, there is a risk that the technology could be hacked, allowing a third-party to ‘eavesdrop’ on someone’s mind. In the future, a brain-computer interface used to control consumer technology could put the user at risk of physical and psychological damage caused by a third-party attack on the technology. There are also ethical and legal concerns over the protection of data generated by these devices that need to be considered.

International human rights laws make no specific mention of neuroscience, although advances in biomedicine have become intertwined with laws, such as those concerning human genetic data. Similar to the historical trajectory of the genetic revolution, the authors state that the on-going neurorevolution will force a reconceptualization of human rights laws and even the creation of new ones.

Marcello Ienca added: “Science-fiction can teach us a lot about the potential threat of technology. Neurotechnology featured in famous stories has in some cases already become a reality, while others are inching ever closer, or exist as military and commercial prototypes. We need to be prepared to deal with the impact these technologies will have on our personal freedom.”

Here’s a link to and a citation for the paper,

Towards new human rights in the age of neuroscience and neurotechnology by Marcello Ienca and Roberto Andorno. Life Sciences, Society and Policy 2017 13:5. DOI: 10.1186/s40504-017-0050-1 Published: 26 April 2017

©  The Author(s). 2017

This paper is open access.

Artificial pancreas in 2018?

According to Dr. Roman Hovorka and Dr. Hood Thabit of the University of Cambridge, UK, an artificial pancreas will be available by 2018, assuming issues such as cybersecurity are resolved. From a June 30, 2016 Diabetologia press release on EurekAlert,

The artificial pancreas — a device which monitors blood glucose in patients with type 1 diabetes and then automatically adjusts levels of insulin entering the body — is likely to be available by 2018, conclude authors of a paper in Diabetologia (the journal of the European Association for the Study of Diabetes). Issues such as speed of action of the forms of insulin used, reliability, convenience and accuracy of glucose monitors plus cybersecurity to protect devices from hacking, are among the issues that are being addressed.

The press release describes the current technology available for diabetes type 1 patients and alternatives other than an artificial pancreas,

Currently available technology allows insulin pumps to deliver insulin to people with diabetes after taking a reading or readings from glucose meters, but these two components are separate. It is the joining together of both parts into a ‘closed loop’ that makes an artificial pancreas, explain authors Dr Roman Hovorka and Dr Hood Thabit of the University of Cambridge, UK. “In trials to date, users have been positive about how use of an artificial pancreas gives them ‘time off’ or a ‘holiday’ from their diabetes management, since the system is managing their blood sugar effectively without the need for constant monitoring by the user,” they say.
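Stripped to its essence, the ‘closed loop’ being described is sensor → algorithm → pump, with no person in between. Here is a deliberately naive sketch of that idea; the target, gain, and readings are all invented for illustration, and real devices use far more sophisticated, clinically validated control algorithms (often model-predictive control), nothing like this:

```python
# Caricature of a closed loop: a glucose reading feeds straight into the
# insulin-dosing decision. All numbers below are invented for
# illustration only -- this is not a clinical dosing algorithm.

TARGET_MMOL_L = 6.0  # hypothetical target blood glucose (mmol/L)
GAIN = 0.1           # hypothetical insulin units per mmol/L above target

def insulin_dose(glucose_mmol_l: float) -> float:
    """One control cycle: map a sensor reading to a pump command."""
    error = glucose_mmol_l - TARGET_MMOL_L
    return max(0.0, GAIN * error)  # never command a negative dose

readings = [9.2, 6.0, 4.5]                   # simulated sensor readings
doses = [insulin_dose(g) for g in readings]  # the pump acts on these directly
```

The point of the sketch is the absence of a human in the loop, which is exactly what makes both the ‘holiday’ from self-management and the cybersecurity concerns discussed below possible.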

One part of the clinical need for the artificial pancreas is the variability of insulin requirements between and within individuals — on one day a person could use one third of their normal requirements, and on another 3 times what they normally would. This is dependent on the individual, their diet, their physical activity and other factors. The combination of all these factors together places a burden on people with type 1 diabetes to constantly monitor their glucose levels, to ensure they don’t end up with too much blood sugar (hyperglycaemic) or more commonly, too little (hypoglycaemic). Both of these complications can cause significant damage to blood vessels and nerve endings, making complications such as cardiovascular problems more likely.

There are alternatives to the artificial pancreas, with improvements in technology in both whole pancreas transplantation and also transplants of just the beta cells from the pancreas which produce insulin. However, recipients of these transplants require drugs to supress their immune systems just as in other organ transplants. In the case of whole pancreas transplantation, major surgery is required; and in beta cell islet transplantation, the body’s immune system can still attack the transplanted cells and kill off a large proportion of them (80% in some cases). The artificial pancreas of course avoids the need for major surgery and immunosuppressant drugs.

Researchers are working to solve one of the major problems with an artificial pancreas according to the press release,

Researchers globally continue to work on a number of challenges faced by artificial pancreas technology. One such challenge is that even fast-acting insulin analogues do not reach their peak levels in the bloodstream until 0.5 to 2 hours after injection, with their effects lasting 3 to 5 hours. So this may not be fast enough for effective control in, for example, conditions of vigorous exercise. Use of the even faster acting ‘insulin aspart’ analogue may remove part of this problem, as could use of other forms of insulin such as inhaled insulin. Work also continues to improve the software in closed loop systems to make it as accurate as possible in blood sugar management.

The press release also provides a brief outline of some of the studies being run on one artificial pancreas or another, offers an abbreviated timeline for when the medical device may be found on the market, and notes specific cybersecurity issues,

A number of clinical studies have been completed using the artificial pancreas in its various forms, in various settings such as diabetes camps for children, and real life home testing. Many of these trials have shown as good or better glucose control than existing technologies (with success defined by time spent in a target range of ideal blood glucose concentrations and reduced risk of hypoglycaemia). A number of other studies are ongoing. The authors say: “Prolonged 6- to 24-month multinational closed-loop clinical trials and pivotal studies are underway or in preparation including adults and children. As closed loop devices may be vulnerable to cybersecurity threats such as interference with wireless protocols and unauthorised data retrieval, implementation of secure communications protocols is a must.”
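One standard ingredient of the secure communications protocols the authors call for is message authentication: each pump command carries a cryptographic tag so the device can reject forged or tampered messages. The sketch below illustrates that single ingredient with an HMAC; the command string and key handling are invented for illustration, and a real device protocol would also need encryption, replay protection, and key management:

```python
import hashlib
import hmac
import secrets

# Illustrative message authentication for a wireless pump command.
key = secrets.token_bytes(32)  # shared secret provisioned when devices pair

def sign(command: bytes) -> bytes:
    """Compute an authentication tag to send alongside the command."""
    return hmac.new(key, command, hashlib.sha256).digest()

def verify(command: bytes, tag: bytes) -> bool:
    """Accept a command only if its tag checks out (constant-time compare)."""
    return hmac.compare_digest(sign(command), tag)

msg = b"SET_BASAL_RATE:0.8"                     # invented command format
tag = sign(msg)
accepted = verify(msg, tag)                     # genuine command passes
rejected = verify(b"SET_BASAL_RATE:8.0", tag)   # tampered dose fails
```

Without something like this, the “interference with wireless protocols” the authors warn about could mean an attacker issuing dosing commands directly.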

The actual timeline to availability of the artificial pancreas, as with other medical devices, encompasses regulatory approvals with reassuring attitudes of regulatory agencies such as the US Food and Drug Administration (FDA), which is currently reviewing one proposed artificial pancreas with approval possibly as soon as 2017. And a recent review by the UK National Institute of Health Research (NIHR) reported that automated closed-loop systems may be expected to appear in the (European) market by the end of 2018. The authors say: “This timeline will largely be dependent upon regulatory approvals and ensuring that infrastructures and support are in place for healthcare professionals providing clinical care. Structured education will need to continue to augment efficacy and safety.”

The authors say: “Cost-effectiveness of closed-loop is to be determined to support access and reimbursement. In addition to conventional endpoints such as blood sugar control, quality of life is to be included to assess burden of disease management and hypoglycaemia. Future research may include finding out which sub-populations may benefit most from using an artificial pancreas. Research is underway to evaluate these closed-loop systems in the very young, in pregnant women with type 1 diabetes, and in hospital in-patients who are suffering episodes of hyperglycaemia.”

They conclude: “Significant milestones moving the artificial pancreas from laboratory to free-living unsupervised home settings have been achieved in the past decade. Through inter-disciplinary collaboration, teams worldwide have accelerated progress and real-world closed-loop applications have been demonstrated. Given the challenges of beta-cell transplantation, closed-loop technologies are, with continuing innovation potential, destined to provide a viable alternative for existing insulin pump therapy and multiple daily insulin injections.”

Here’s a link to and a citation for the paper,

Coming of age: the artificial pancreas for type 1 diabetes by Hood Thabit, Roman Hovorka. Diabetologia (2016). doi:10.1007/s00125-016-4022-4 First Online: 30 June 2016

This is an open access paper.

Nanotechnology and cybersecurity risks

Gregory Carpenter has written a gripping (albeit somewhat exaggerated) piece for Signal, a publication of the Armed Forces Communications and Electronics Association (AFCEA), about cybersecurity issues and nanomedicine endeavours. From Carpenter’s Jan. 1, 2016 article titled, When Lifesaving Technology Can Kill; The Cyber Edge,

The exciting advent of nanotechnology that has inspired disruptive and lifesaving medical advances is plagued by cybersecurity issues that could result in the deaths of people that these very same breakthroughs seek to heal. Unfortunately, nanorobotic technology has suffered from the same security oversights that afflict most other research and development programs.

Nanorobots, or small machines [or nanobots], are vulnerable to exploitation just like other devices.

At the moment, the issue of cybersecurity exploitation is secondary to making nanobots, or nanorobots, dependably functional. As far as I’m aware, there is no such nanobot. Even nanoparticles meant to function as packages for drug delivery have not been perfected (see one of the controversies with nanomedicine drug delivery described in my Nov. 26, 2015 posting).

That said, Carpenter’s point about cybersecurity is well taken since security features are often overlooked in new technology. For example, automated banking machines (ABMs) had woefully poor (inadequate, almost nonexistent) security when they were first introduced.

Carpenter outlines some of the problems that could occur, assuming some of the latest research could be reliably brought to market,

The U.S. military has joined the fray of nanorobotic experimentation, embarking on revolutionary research that could lead to a range of discoveries, from unraveling the secrets of how brains function to figuring out how to permanently purge bad memories. Academia is making amazing advances as well. Harnessing progress by Harvard scientists to move nanorobots within humans, researchers at the University of Montreal, Polytechnique Montreal and Centre Hospitalier Universitaire Sainte-Justine are using mobile nanoparticles inside the human brain to open the blood-brain barrier, which protects the brain from toxins found in the circulatory system.

A different type of technology presents a risk similar to the nanoparticles scenario. A DARPA-funded program known as Restoring Active Memory (RAM) addresses post-traumatic stress disorder, attempting to overcome memory deficits by developing neuroprosthetics that bridge gaps in an injured brain. In short, scientists can wipe out a traumatic memory, and they hope to insert a new one—one the person has never actually experienced. Someone could relish the memory of a stroll along the French Riviera rather than a terrible firefight, even if he or she has never visited Europe.

As an individual receives a disruptive memory, a cyber criminal could manage to hack the controls. Breaches of the brain could become a reality, putting humans at risk of becoming zombie hosts [emphasis mine] for future virus deployments. …

At this point, the ‘zombie’ scenario Carpenter suggests seems a bit over-the-top, but it does hark back to the roots of the zombie myth, where the undead aren’t mindlessly searching for brains but are humans whose wills have been overcome. Mike Mariani, in an Oct. 28, 2015 article for The Atlantic, has presented a thought-provoking history of zombies,

… the zombie myth is far older and more rooted in history than the blinkered arc of American pop culture suggests. It first appeared in Haiti in the 17th and 18th centuries, when the country was known as Saint-Domingue and ruled by France, which hauled in African slaves to work on sugar plantations. Slavery in Saint-Domingue under the French was extremely brutal: Half of the slaves brought in from Africa were worked to death within a few years, which only led to the capture and import of more. In the hundreds of years since, the zombie myth has been widely appropriated by American pop culture in a way that whitewashes its origins—and turns the undead into a platform for escapist fantasy.

The original brains-eating fiend was a slave not to the flesh of others but to his own. The zombie archetype, as it appeared in Haiti and mirrored the inhumanity that existed there from 1625 to around 1800, was a projection of the African slaves’ relentless misery and subjugation. Haitian slaves believed that dying would release them back to lan guinée, literally Guinea, or Africa in general, a kind of afterlife where they could be free. Though suicide was common among slaves, those who took their own lives wouldn’t be allowed to return to lan guinée. Instead, they’d be condemned to skulk the Hispaniola plantations for eternity, an undead slave at once denied their own bodies and yet trapped inside them—a soulless zombie.

I recommend reading Mariani’s article although I do have one nit to pick. I can’t find a reference to brain-eating zombies before George Romero introduced the concept in his movies. This Zombie Wikipedia entry seems to agree with my understanding (if I’m wrong, please do let me know and, if possible, provide a link to the corrective text).

Getting back to Carpenter and cybersecurity with regard to nanomedicine, while his scenarios may seem a trifle extreme, it’s precisely the kind of thinking you need when attempting to anticipate problems. I do wish he’d made clear that the technology still has a ways to go.

Digital life in Estonia and the National Film Board of Canada’s ‘reclaim control of your online identity’ series

Internet access is considered a human right in Estonia (according to a July 1, 2008 story by Colin Woodard for the Christian Science Monitor). That commitment has led to some very interesting developments in Estonia, which are being noticed internationally. The Woodrow Wilson International Center for Scholars (Wilson Center) is hosting the president of Estonia, Toomas Hendrik Ilves, at an April 21, 2015 event (from the April 15, 2015 event invitation),

The Estonia Model: Why a Free and Secure Internet Matters
After regaining independence in 1991, the Republic of Estonia built a new government from the ground up. The result was the world’s most comprehensive and efficient ‘e-government’: a digital administration with online IDs for every citizen, empowered by a free nationwide Wi-Fi network and a successful school program–called Tiger Leap–that boosts tech competence at every age level. While most nations still struggle to provide comprehensive Internet access, Estonia has made major progress towards a strong digital economy, along with robust protections for citizen rights. E-government services have made Estonia one of the world’s most attractive environments for tech firms and start-ups, incubating online powerhouses like Skype and Transferwise.

An early adopter of information technology, Estonia was also one of the first victims of a cyber attack. In 2007, large-scale Distributed Denial of Service attacks took place, mostly against government websites and financial services. The damages of these attacks were not remarkable, but they did give the country’s security experts valuable experience and information in dealing with such incidents. Eight years on, the Wilson Center is pleased to welcome Estonia’s President Toomas Hendrik Ilves for a keynote address on the state of cybersecurity, privacy, and the digital economy. [emphasis mine]

Introduction
The Honorable Jane Harman
Director, President and CEO, The Wilson Center

Keynote
His Excellency Toomas Hendrik Ilves
President of the Republic of Estonia

The event is being held in Washington, DC from 1 – 2 pm EST on April 21, 2015. There does not seem to be a webcast option for viewing the presentation online (a little ironic, non?). You can register here, should you be able to attend.

I did find a little more information about Estonia and its digital adventures, much of it focused on the digital economy, in an Oct. 8, 2014 article by Lily Hay Newman for Slate,

Estonia is planning to be the first country to offer a status called e-residency. The program’s website says, “You can become an e-Estonian!” …

The website says that anyone can apply to become an e-resident and receive an e-Estonian online identity “in order to get secure access to world-leading digital services from wherever you might be.” …

You can’t deny that the program has a compelling marketing pitch, though. It’s “for anybody who wants to run their business and life in the most convenient aka digital way!”

You can find the Estonian e-residency website here. There’s also a brochure describing the benefits,

It is especially useful for entrepreneurs and others who already have some relationship to Estonia: who do business, work, study or visit here but have not become a resident. However, e-residency is also launched as a platform to offer digital services to a global audience with no prior Estonian affiliation – for anybody who wants to run their business and life in the most convenient aka digital way! We plan to keep adding new useful services from early 2015 onwards.

I also found an Oct. 31, 2013 blog post by Peter Herlihy on the gov.uk website for the UK’s Government Digital Service (GDS). Herlihy offers the perspective of a government bureaucrat (Note: A link has been removed),

I’ve just got back from a few days in the Republic of Estonia, looking at how they deliver their digital services and sharing stories of some of the work we are up to here in the UK. We have an ongoing agreement with the Estonian government to work together and share knowledge and expertise, and that is what brought me to the beautiful city of Tallinn.

I knew they were digitally sophisticated. But even so, I wasn’t remotely prepared for what I learned.

Estonia has probably the most joined up digital government in the world. Its citizens can complete just about every municipal or state service online and in minutes. You can formally register a company and start trading within 18 minutes, all of it from a coffee shop in the town square. You can view your educational record, medical record, address, employment history and traffic offences online – and even change things that are wrong (or at least directly request changes). The citizen is in control of their data.

So we should do whatever they’re doing then, right? Well, maybe. …

National Film Board of Canada

There’s a new series debuting this week about reclaiming control of your life online, titled Do Not Track, according to an April 14, 2015 post on the National Film Board of Canada (NFB) blog (Note: Links have been removed),

An eye-opening personalized look at how online data is being tracked and sold.

Starting April 14 [2015], the online interactive documentary series Do Not Track will show you just how much the web knows about you―and the results may astonish you.

Conceived and directed by acclaimed Canadian documentary filmmaker and web producer Brett Gaylor, the 7-part series Do Not Track is an eye-opening look at how online behaviour is being tracked, analyzed and sold―an issue affecting each of us, and billions of web users around the world.

Created with the goal of helping users learn how to take back control of their digital identity, Do Not Track goes beyond a traditional documentary film experience: viewers who agree to share their personal data are offered an astounding real-time look at how their online ID is being tracked.

Do Not Track is a collective investigation, bringing together public media broadcasters, writers, developers, thinkers and independent media makers, including Gaylor, Vincent Glad, Zineb Dryef, Richard Gutjahr, Sandra Rodriguez, Virginie Raisson and the digital studio Akufen.

Do Not Track episodes launch every 2 weeks, from April 14 to June 9, 2015, in English, French and German. Roughly 7 minutes in length, each episode has a different focus―from our mobile phones to social networks, targeted advertising to big data with a different voice and a different look, all coupled with sharp and varied humour. Episodes are designed to be clear and accessible to all.

You can find Do Not Track here; the episode descriptions are from the April 14, 2015 posting,

April 14 | Episode 1: Morning Rituals
This episode introduces viewers to Brett Gaylor and offers a call to action: let’s track the trackers together.

Written and directed by Brett Gaylor

Interviews: danah boyd, principal researcher, Microsoft Research; Nathan Freitas, founder, and Harlo Holmes, software developer, The Guardian Project; Ethan Zuckerman, director, MIT Center for Civic Media*

April 14 | Episode 2: Breaking Ad
We meet the man who invented the Internet pop-up ad―and a woman who’s spent nearly a decade reporting on the web’s original sin: advertising.

Directed by Brett Gaylor | Written by Vincent Glad

Interviews: Ethan Zuckerman; Julia Angwin, journalist and author of Dragnet Nation: A Quest for Privacy, Security, and Freedom in a World of Relentless Surveillance*

April 28 | Episode 3: The Harmless Data We Leave on Social Media
This episode reveals how users can be tracked from Facebook activity and how far-reaching the data trail is.

Directed by Brett Gaylor | Written by Sandra Marsh | Hosted by Richard Gutjahr

Interviews: Constanze Kurz, writer and computer scientist, Chaos Computer Club

May 12 | Episode 4: Your Mobile Phone, the Spy
Your smartphone is spying on you—where does all this data go, what becomes of it, and how is it used?

Directed by Brett Gaylor | Written and hosted by Zineb Dryef

Interviews: Harlo Holmes; Rand Hindi, data scientist and founder of Snips*

May 26 | Episode 5: Big Data and Its Algorithms
There’s an astronomical quantity of data that may or may not be used against us. Based on the information collected since the start of this documentary, users discover the algorithmic interpretation game and its absurdity.

Directed by Sandra Rodriguez and Akufen | Written by Sandra Rodriguez

Interviews: Kate Crawford, principal researcher, Microsoft Research New York City; Matthieu Dejardins, e-commerce entrepreneur and CEO, NextUser; Tyler Vigen, founder, Spurious Correlations, and Joint Degree Candidate, Harvard Law School; Cory Doctorow, science fiction novelist, blogger and technology activist; Alicia Garza, community organizer and co-founder, #BlackLivesMatter; Yves-Alexandre De Montjoye, computational privacy researcher, Massachusetts Institute of Technology Media Lab*

June 9 | Episode 6: Filter Bubble
The Internet uses filters based on your browsing history, narrowing down the information you get―until you’re painted into a digital corner.

Written and directed by Brett Gaylor*

June 9 | Episode 7: The Future of Tracking
Choosing to protect our privacy online today will dramatically shape our digital future. What are our options?

Directed by Brett Gaylor | Written by Virginie Raisson

Interviews: Cory Doctorow

Enjoy!