
Local resistance to Lomiko Metals’ Outaouais graphite mine

It’s been a while since BC-based Lomiko Metals has rated more than a passing mention here. Back in June 2024, the company hit a rough patch with its plans to mine graphite at one of its Québec properties, from a June 9, 2024 article by Joe Bongiorno for Canadian Broadcasting Corporation (CBC) news online,

In Quebec’s Laurentians region, a few kilometres from a wildlife reserve and just outside the town of Duhamel, lies a source of one of the world’s most sought after minerals for manufacturing electric vehicle batteries: graphite.

Since Lomiko Metals Inc., a mining company based in Surrey, B.C., announced plans to build a graphite mine in the area, some residents living nearby have protested the project, fearing the potential harm to the environment.

But opposition has only gained steam after locals found out last month that the [US] Pentagon is involved in the project.

In May, Lomiko announced it received a grant of $11.4 million from the U.S. Department of Defence and another $4.9 million from Natural Resources Canada to study the conversion of graphite into battery-grade material for powering electric vehicles.

In its own announcement, the Pentagon said Lomiko’s graphite will bolster North American energy supply chains and be used for “defence applications,” words that make Duhamel resident Louis Saint-Hilaire uneasy.

Depending on how you view things, this is either good news or bad news, from a September 17, 2024 news item on CBC news online, Note: Links have been removed,

Two Quebec cabinet ministers say the province will not fund a proposed graphite mine north of Gatineau because it doesn’t meet the government’s standards for local support.

B.C.-based Lomiko Metals has been testing samples from its La Loutre site near the town of Duhamel, which the company says on its project website has shown “excellent graphite properties” for making batteries.

Many nearby residents have been against the proposal for years due to a perceived threat to outdoor recreation and associated businesses. No environmental assessment of the site has been conducted.

La Loutre has drawn funding from the Canadian and American governments for its potential role in the switch from gas to electric vehicles and related drop in fossil fuel emissions, but Minister Responsible for the Outaouais Region Mathieu Lacombe said Monday [September 16, 2024] the project lacks provincial support.

Lacombe pointed to Premier François Legault indicating in 2022 that no mining project will be carried out without what’s referred to in the province as “social acceptability” — essentially, buy-in from affected communities.

Natural Resources Minister Blanchette Vézina said the company’s request for funding from Investissement Québec wouldn’t be successful because it lacks public support.

Lomiko Metals has not responded to requests from Radio-Canada for an interview. It’s not clear what the company will do next, or what will happen with a referendum on the project scheduled for November 2025.

Embedded in the September 17, 2024 news item is a radio segment where an expert further dissects the implications of the news.

For anyone interested in graphite, I have a January 3, 2023 posting, “Making graphite from coal and a few graphite facts.” There have been some changes with the ‘graphite facts’ since the posting was published but most of the other information should still be valid.

Here are the updated facts from the Natural Resources Canada Graphite Facts webpage, which was updated March 1, 2024,

Graphite is a non-metallic mineral that has properties similar to metals, such as a good ability to conduct heat and electricity. Graphite occurs naturally or can be produced synthetically. Purified natural graphite has higher crystalline structure and offers better electrical and thermal conductivity than synthetic material.

Key facts

  • In 2022, global graphite mine production was about 1.3 million tonnes, a 15% increase from 2021.
  • Canadian natural graphite production comes from the Lac des Iles mine in Quebec.
  • Canada ranks as the sixth global producer of graphite with 13,000 tonnes of production in 2022.
  • Canada exported $22 million worth of natural graphite and $14 million worth of synthetic graphite globally in 2022, mostly to the United States.

Production

The Lac des Iles mine in Quebec is the only mine in Canada that produced graphite in 2022 [emphasis mine]. However, many other companies are working on advancing graphite projects. Canada produced 13,000 tonnes of natural graphite in 2022, an increase from the 9,743 tonnes produced in 2021.

International context

Global production and demand for graphite are anticipated to increase in the coming years, largely because of the use of graphite in the batteries of electric vehicles. In 2022, global consumption of graphite reached 3.8 million tonnes, compared to 3.6 million tonnes in 2021. Synthetic graphite accounted for about 56% of the graphite consumption, which was concentrated largely in Asia. North America consumes only 1% of global natural graphite, but almost 9% of synthetic graphite.

Global mine production of graphite was 1.3 million tonnes in 2022, up 15% compared to the previous year. China is the leading global producer, accounting for 66% of production in 2022. Canada ranks sixth globally for natural graphite production, producing about 1% of global natural graphite.
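
Those figures hang together arithmetically. Here is a quick back-of-the-envelope check (a minimal sketch in Python, using only the tonnages quoted above):

```python
# Sanity check of the NRCan graphite figures quoted above.
canada_2022 = 13_000      # tonnes of natural graphite, Canada, 2022
canada_2021 = 9_743       # tonnes, Canada, 2021
world_2022 = 1_300_000    # tonnes, global mine production, 2022

# Canada's share of global natural graphite production (~1%, as stated).
print(f"share: {canada_2022 / world_2022:.1%}")   # -> share: 1.0%

# Canada's year-over-year growth, versus the stated 15% global increase.
growth = (canada_2022 - canada_2021) / canada_2021
print(f"growth: {growth:.0%}")                    # -> growth: 33%
```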

It seems Lomiko Metals’ La Loutre project will not be adding to the country’s graphite production. I wonder what the company will do now, as La Loutre appears to be its chief asset, from a November 23, 2023 news release, Note: A link has been removed,

Montreal, Quebec – November 23, 2023 – Lomiko Metals Inc. (TSX.V: LMR) (“Lomiko Metals” or the “Company”) is pleased to announce the launch of a private placement (the “Private Placement“) to support the Company’s progress with its graphite and lithium projects in Quebec, Canada. The Private Placement will consist of hard dollar units for gross proceeds of up to $500,000.

Belinda Labatte, CEO and Director of Lomiko Metals: “Lomiko has accomplished many milestones in the last 18 months, including an updated Mineral Resource Estimate for La Loutre, environmental baseline studies and advancing the metallurgical studies. With this financing and committed investors, we will advance pre-feasibility level initiatives, and continue to advance the important discussions with communities, partners and First Nation Kitigan Zibi.”

Retirement of Director

A special thank you and note of appreciation for Paul Gill, Executive Chair, who will not stand for re-election as he pursues other opportunities. We appreciate his service to the company and long-standing leadership at Lomiko. We wish him well in his future endeavours. Paul Gill will continue to serve as Executive Chair until the Company’s Annual and Special Meeting on December 20, 2023.

About Lomiko Metals Inc.

The Company holds mineral interests in its La Loutre graphite development in southern Quebec. The La Loutre project site is within the Kitigan Zibi Anishinabeg (KZA) First Nation’s territory. The KZA First Nation is part of the Algonquin Nation, and the KZA traditional territory is situated within the Outaouais and Laurentides regions.​ Located 180 kilometers northwest of Montreal, the property consists of one large, continuous block with 76 mineral claims totaling 4,528 hectares (45.3 km2).

In addition to La Loutre, Lomiko is working with Critical Elements Lithium Corporation towards earning its 49% stake in the Bourier Project as per the option agreement announced on April 27th, 2021. The Bourier project site is located near Nemaska Lithium and Critical Elements, south-east of the Eeyou Istchee James Bay territory in Quebec. It consists of 203 claims, for a total ground position of 10,252.20 hectares (102.52 km2), in Canada’s lithium triangle near the James Bay region of Quebec, an area that has historically housed lithium deposits and mineralization trends.

This is quite a setback for Lomiko Metals.

October 2024

It seems that while the company has regrouped, it has not entirely given up on La Loutre, from an October 30, 2024 news release,

October 30th, 2024 – Montreal, Québec: Lomiko Metals Inc. (TSX.V: LMR) (“Lomiko Metals” or the “Company”) is pleased to announce that the 2024 Beep-Map prospecting and sampling program is well underway on the Grenville Graphite Mineral Belt regional graphite exploration project.  The “Grenville” project includes 268 mineral claims covering 15,639 hectares on six blocks in the Laurentian region of Quebec, approximately 200 kilometers northwest of Montréal within a 100 km radius of the Company’s flagship La Loutre graphite project [emphasis mine].  The 2024 work is focused on following up on the very successful graphite results reported in the Company’s press release dated July 11, 2023.  To date, a total of 265 samples have been collected and submitted for analysis from the Dieppe, Meloche, Ruisseau and Tremblant properties, the focus of this campaign. No work is being conducted on the Carmin or North Low properties at this time.  The results of the exploration campaign will be reported as they become available.  The regional exploration program focuses on improving knowledge of graphite showings at the most prospective targets outlined in the 2022 and 2023 exploration programs.

Corporate and market update

Lomiko is part of the global transition to electrification and localization of transportation supply chains, a change that impacts all forms of transportation, cars, heavy equipment, marine etc. It also impacts communities and our talent pool to build these businesses of the future. Natural flake graphite, and specifically fine flake graphite, is crucial for the development of the North American anode industry in the new energy framework driven by tariffs on critical minerals, long-term supply chain resilience, and responsible domestic industrial growth. The La Loutre graphite is 67% fine flake distribution, making it an important source of long-term future graphite supply [emphasis mine] with demonstrated success for anode battery technology – among other uses currently being evaluated by Lomiko. According to Fortune Business Insights report dated October 14, 2024, the North American EV market is expected to grow almost quadruple to $230 billion in 2030 from $63 billion in 2022, with growth from other transportation sectors still nascent. Lomiko continues to engage with partners, customers and suppliers in building the future of this industry and developing R&D for the responsible extraction of this material.
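
An aside in my voice, not the company’s: the “almost quadruple” claim checks out. A quick sketch using only the figures quoted above:

```python
# Check of the EV-market projection cited in the news release.
start, end = 63e9, 230e9   # US$ market size, 2022 and 2030
print(end / start)         # ~3.65x, i.e. "almost quadruple"

# Implied compound annual growth rate over the 8 years:
cagr = (end / start) ** (1 / 8) - 1
print(f"{cagr:.1%}")       # -> 17.6% per year
```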

Lomiko is initiating the reimbursement process for its recently awarded grant from the United States government and contribution agreement from the Canadian government, for work completed to date and within the scope of the agreements. It is the recipient of a Department of Defense (“DoD”) Technology Investment Agreement (“TIA”) grant of US$8.35 million (approximately CA$11.4 million) where Lomiko will match the funding over a period of 5 years, for a total agreement with the DoD of US$16.7 million. The grant falls under Title III of the Defense Production Act and is funded through the Inflation Reduction Act to ensure energy security in North America. The Company has also been approved for funding of CA$4.9 million in a non-repayable contribution agreement from the Critical Mineral Research, Development and Demonstration (CMRDD) program administered by Natural Resources Canada, with the total project cost being CA$6.6 million. The announcement was made on May 16, 2024 and can be viewed on our website at www.lomiko.com.

In addition, Lomiko announces the resignation of CFO and Corporate Secretary, Vince Osbourne, who will be pursuing a role with a private company and maintain a strategic advisory role with Lomiko going forward. Jacqueline Michael, Controller, will replace Vince Osbourne as CFO on an interim basis, with the role of Corporate Secretary to be assumed by current professionals working with Lomiko.

On behalf of the board of directors and management, Belinda Labatte, CEO and Interim Chair of the board of directors stated: “Vince has been an integral member of the Lomiko team, and we wish him success in his future endeavors, and we are pleased to continue our working relationship in his new capacity to Lomiko as advisor to the Company.”

Now, the new administration entering the US White House has a chief advisor and co-leader of a new government agency [the Department of Government Efficiency] in Elon Musk, who is extremely wealthy and has many businesses, notably Tesla, an electric vehicle (EV) company. It would seem that Mr. Musk might have an interest in easy access to minerals important to Tesla’s business.

I wonder how this is going to work out.

Hardware policies best way to manage AI safety?

Regulation of artificial intelligence (AI) has become very topical in the last couple of years. There was an AI safety summit in November 2023 at Bletchley Park in the UK (see my November 2, 2023 posting for more about that international meeting).

A software-focused approach?

This year (2024) has seen a rise in legislative and proposed legislative activity. I have some articles on a few of these activities. China was the first to enact regulations of any kind on AI according to Matt Sheehan’s February 27, 2024 paper for the Carnegie Endowment for International Peace,

In 2021 and 2022, China became the first country to implement detailed, binding regulations on some of the most common applications of artificial intelligence (AI). These rules formed the foundation of China’s emerging AI governance regime, an evolving policy architecture that will affect everything from frontier AI research to the functioning of the world’s second-largest economy, from large language models in Africa to autonomous vehicles in Europe.

The Chinese Communist Party (CCP) and the Chinese government started that process with the 2021 rules on recommendation algorithms, an omnipresent use of the technology that is often overlooked in international AI governance discourse. Those rules imposed new obligations on companies to intervene in content recommendations, granted new rights to users being recommended content, and offered protections to gig workers subject to algorithmic scheduling. The Chinese party-state quickly followed up with a new regulation on “deep synthesis,” the use of AI to generate synthetic media such as deepfakes. Those rules required AI providers to watermark AI-generated content and ensure that content does not violate people’s “likeness rights” or harm the “nation’s image.” Together, these two regulations also created and amended China’s algorithm registry, a regulatory tool that would evolve into a cornerstone of the country’s AI governance regime.

The UK has adopted a more generalized approach focused on encouraging innovation, according to Valeria Gallo and Suchitra Nair’s February 21, 2024 article for Deloitte (a British professional services firm also considered one of the big four accounting firms worldwide),

At a glance

The UK Government has adopted a cross-sector and outcome-based framework for regulating AI, underpinned by five core principles. These are safety, security and robustness, appropriate transparency and explainability, fairness, accountability and governance, and contestability and redress.

Regulators will implement the framework in their sectors/domains by applying existing laws and issuing supplementary regulatory guidance. Selected regulators will publish their AI annual strategic plans by 30th April [2024], providing businesses with much-needed direction.

Voluntary safety and transparency measures for developers of highly capable AI models and systems will also supplement the framework and the activities of individual regulators.

The framework will not be codified into law for now, but the Government anticipates the need for targeted legislative interventions in the future. These interventions will address gaps in the current regulatory framework, particularly regarding the risks posed by complex General Purpose AI and the key players involved in its development.

Organisations must prepare for increased AI regulatory activity over the next year, including guidelines, information gathering, and enforcement. International firms will inevitably have to navigate regulatory divergence.

While most of the focus appears to be on the software (e.g., General Purpose AI), the UK framework does not preclude hardware.

The European Union (EU) is preparing to pass its own AI regulation act through the European Parliament in 2024, according to a December 19, 2023 update of the European Parliament’s “EU AI Act: first regulation on artificial intelligence” article, Note: Links have been removed,

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. It says that AI systems that can be used in different applications are analysed and classified according to the risk they pose to users. The different risk levels will mean more or less regulation.

The agreed text is expected to be finally adopted in April 2024. It will be fully applicable 24 months after entry into force, but some parts will be applicable sooner:

*The ban of AI systems posing unacceptable risks will apply six months after the entry into force

*Codes of practice will apply nine months after entry into force

*Rules on general-purpose AI systems that need to comply with transparency requirements will apply 12 months after the entry into force

*High-risk systems will have more time to comply with the requirements as the obligations concerning them will become applicable 36 months after the entry into force.

This EU initiative, like the UK framework, seems largely focused on AI software and according to the Wikipedia entry “Regulation of artificial intelligence,”

… The AI Act is expected to come into effect in late 2025 or early 2026.[109]

I do have a few postings about Canadian regulatory efforts, which also seem to be focused on software but don’t preclude hardware. My January 20, 2024 posting, “Canada’s voluntary code of conduct relating to advanced generative AI (artificial intelligence) systems,” also includes information about legislative efforts, although my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27),” offers more comprehensive information about Canada’s legislative progress, or lack thereof.

The US is always to be considered in these matters and I have a November 2023 ‘briefing’ by Müge Fazlioglu on the International Association of Privacy Professionals (IAPP) website where she provides a quick overview of the international scene before diving deeper into US AI governance policy through the Barack Obama, Donald Trump, and Joe Biden administrations. There’s also this January 29, 2024 US White House “Fact Sheet: Biden-⁠Harris Administration Announces Key AI Actions Following President Biden’s Landmark Executive Order.”

What about AI and hardware?

A February 15, 2024 news item on ScienceDaily suggests that regulating hardware may be the most effective way of regulating AI,

Chips and datacentres — the ‘compute’ power driving the AI revolution — may be the most effective targets for risk-reducing AI policies as they have to be physically possessed, according to a new report.

A global registry tracking the flow of chips destined for AI supercomputers is one of the policy options highlighted by a major new report calling for regulation of “compute” — the hardware that underpins all AI — to help prevent artificial intelligence misuse and disasters.

Other technical proposals floated by the report include “compute caps” — built-in limits to the number of chips each AI chip can connect with — and distributing a “start switch” for AI training across multiple parties to allow for a digital veto of risky AI before it feeds on data.

The experts point out that powerful computing chips required to drive generative AI models are constructed via highly concentrated supply chains, dominated by just a handful of companies — making the hardware itself a strong intervention point for risk-reducing AI policies.

The report, published 14 February [2024], is authored by nineteen experts and co-led by three University of Cambridge institutes — the Leverhulme Centre for the Future of Intelligence (LCFI), the Centre for the Study of Existential Risk (CSER) and the Bennett Institute for Public Policy — along with OpenAI and the Centre for the Governance of AI.

A February 14, 2024 University of Cambridge press release by Fred Lewsey (also on EurekAlert), which originated the news item, provides more information about the ‘hardware approach to AI regulation’,

“Artificial intelligence has made startling progress in the last decade, much of which has been enabled by the sharp increase in computing power applied to training algorithms,” said Haydn Belfield, a co-lead author of the report from Cambridge’s LCFI. 

“Governments are rightly concerned about the potential consequences of AI, and looking at how to regulate the technology, but data and algorithms are intangible and difficult to control.

“AI supercomputers consist of tens of thousands of networked AI chips hosted in giant data centres often the size of several football fields, consuming dozens of megawatts of power,” said Belfield.

“Computing hardware is visible, quantifiable, and its physical nature means restrictions can be imposed in a way that might soon be nearly impossible with more virtual elements of AI.”

The computing power behind AI has grown exponentially since the “deep learning era” kicked off in earnest, with the amount of “compute” used to train the largest AI models doubling around every six months since 2010. The biggest AI models now use 350 million times more compute than thirteen years ago.
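
Another aside from me: those two growth figures can be reconciled with a little arithmetic. A quick sketch, taking the doubling time and the 350-million multiplier from the press release above:

```python
import math

# "Doubling around every six months" over ~13 years:
years = 13
print(2 ** (years * 2))        # 26 doublings -> ~67-million-fold growth

# The stated 350-million-fold increase implies slightly faster doubling:
doublings = math.log2(350e6)   # ~28.4 doublings
print(f"{years * 12 / doublings:.1f} months per doubling")  # -> 5.5
```

In other words, a 350-million-fold increase corresponds to a doubling time a little under six months, consistent with “around every six months.”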

Government efforts across the world over the past year – including the US Executive Order on AI, EU AI Act, China’s Generative AI Regulation, and the UK’s AI Safety Institute – have begun to focus on compute when considering AI governance.

Outside of China, the cloud compute market is dominated by three companies, termed “hyperscalers”: Amazon, Microsoft, and Google. “Monitoring the hardware would greatly help competition authorities in keeping in check the market power of the biggest tech companies, and so opening the space for more innovation and new entrants,” said co-author Prof Diane Coyle from Cambridge’s Bennett Institute. 

The report provides “sketches” of possible directions for compute governance, highlighting the analogy between AI training and uranium enrichment. “International regulation of nuclear supplies focuses on a vital input that has to go through a lengthy, difficult and expensive process,” said Belfield. “A focus on compute would allow AI regulation to do the same.”

Policy ideas are divided into three camps: increasing the global visibility of AI computing; allocating compute resources for the greatest benefit to society; enforcing restrictions on computing power.

For example, a regularly-audited international AI chip registry requiring chip producers, sellers, and resellers to report all transfers would provide precise information on the amount of compute possessed by nations and corporations at any one time.
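
To make the registry idea concrete, here is a minimal sketch of my own showing what per-transfer reporting and aggregation could look like. Every name and field below is a hypothetical illustration; the report does not specify a schema:

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ChipTransfer:
    """One reported transfer of AI chips (hypothetical schema)."""
    chip_model: str   # accelerator model being transferred
    quantity: int     # number of chips
    seller: str       # reporting producer, seller, or reseller
    buyer: str        # receiving corporation or nation
    date: str         # ISO date of the transfer

def holdings(ledger: list[ChipTransfer]) -> dict[str, int]:
    """Net chip holdings per entity, derived from the transfer ledger."""
    totals: dict[str, int] = defaultdict(int)
    for t in ledger:
        totals[t.seller] -= t.quantity  # chips leave the seller...
        totals[t.buyer] += t.quantity   # ...and accrue to the buyer
    return dict(totals)

ledger = [
    ChipTransfer("accel-x", 10_000, "FabCo", "CloudCorp", "2024-02-01"),
    ChipTransfer("accel-x", 2_500, "CloudCorp", "LabInc", "2024-03-15"),
]
print(holdings(ledger))
# {'FabCo': -10000, 'CloudCorp': 7500, 'LabInc': 2500}
```

An auditor summing such a ledger could see, at any moment, roughly how much compute each entity holds, which is the visibility the report is after.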

The report even suggests a unique identifier could be added to each chip to prevent industrial espionage and “chip smuggling”.

“Governments already track many economic transactions, so it makes sense to increase monitoring of a commodity as rare and powerful as an advanced AI chip,” said Belfield. However, the team point out that such approaches could lead to a black market in untraceable “ghost chips”.

Other suggestions to increase visibility – and accountability – include reporting of large-scale AI training by cloud computing providers, and privacy-preserving “workload monitoring” to help prevent an arms race if massive compute investments are made without enough transparency.  

“Users of compute will engage in a mixture of beneficial, benign and harmful activities, and determined groups will find ways to circumvent restrictions,” said Belfield. “Regulators will need to create checks and balances that thwart malicious or misguided uses of AI computing.”

These might include physical limits on chip-to-chip networking, or cryptographic technology that allows for remote disabling of AI chips in extreme circumstances. One suggested approach would require the consent of multiple parties to unlock AI compute for particularly risky training runs, a mechanism familiar from nuclear weapons.
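
As an illustration of that multi-party consent idea, here is a toy sketch of mine. The report does not specify a mechanism, and a real design would use threshold cryptography rather than the shared-key HMAC shortcut below; all names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical per-party keys, provisioned out-of-band (toy values).
PARTY_KEYS = {
    "regulator": b"key-regulator",
    "cloud_provider": b"key-cloud-provider",
    "auditor": b"key-auditor",
}
THRESHOLD = 2  # unlocking compute requires 2 of the 3 parties

def approve(party: str, run_id: str) -> bytes:
    """A party signs off on one specific training run."""
    return hmac.new(PARTY_KEYS[party], run_id.encode(), hashlib.sha256).digest()

def unlock(run_id: str, approvals: dict[str, bytes]) -> bool:
    """Permit the run only if enough distinct parties gave valid approvals."""
    valid = sum(
        1
        for party, tag in approvals.items()
        if party in PARTY_KEYS
        and hmac.compare_digest(tag, approve(party, run_id))
    )
    return valid >= THRESHOLD

a = approve("regulator", "run-42")
b = approve("auditor", "run-42")
print(unlock("run-42", {"regulator": a, "auditor": b}))  # True
print(unlock("run-42", {"regulator": a}))                # False
```

The point of the sketch is the veto structure: no single party, acting alone, can authorize a risky training run.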

AI risk mitigation policies might see compute prioritised for research most likely to benefit society – from green energy to health and education. This could even take the form of major international AI “megaprojects” that tackle global issues by pooling compute resources.

The report’s authors are clear that their policy suggestions are “exploratory” rather than fully fledged proposals and that they all carry potential downsides, from risks of proprietary data leaks to negative economic impacts and the hampering of positive AI development.

They offer five considerations for regulating AI through compute, including the exclusion of small-scale and non-AI computing, regular revisiting of compute thresholds, and a focus on privacy preservation.

Added Belfield: “Trying to govern AI models as they are deployed could prove futile, like chasing shadows. Those seeking to establish AI regulation should look upstream to compute, the source of the power driving the AI revolution. If compute remains ungoverned it poses severe risks to society.”

You can find the report, “Computing Power and the Governance of Artificial Intelligence,” on the website of the University of Cambridge’s Centre for the Study of Existential Risk.

Authors include: Girish Sastry, Lennart Heim, Haydn Belfield, Markus Anderljung, Miles Brundage, Julian Hazell, Cullen O’Keefe, Gillian K. Hadfield, Richard Ngo, Konstantin Pilz, George Gor, Emma Bluemke, Sarah Shoker, Janet Egan, Robert F. Trager, Shahar Avin, Adrian Weller, Yoshua Bengio, and Diane Coyle.

The authors are associated with these companies/agencies: OpenAI, Centre for the Governance of AI (GovAI), Leverhulme Centre for the Future of Intelligence at the Uni. of Cambridge, Oxford Internet Institute, Institute for Law & AI, University of Toronto Vector Institute for AI, Georgetown University, ILINA Program, Harvard Kennedy School (of Government), *AI Governance Institute,* Uni. of Oxford, Centre for the Study of Existential Risk at Uni. of Cambridge, Uni. of Cambridge, Uni. of Montreal / Mila, Bennett Institute for Public Policy at the Uni. of Cambridge.

“The ILINA Program is dedicated to providing an outstanding platform for Africans to learn and work on questions around maximizing wellbeing and responding to global catastrophic risks” according to the organization’s homepage.

*As for the AI Governance Institute, I believe that should be the Centre for the Governance of AI at Oxford University since the associated academic is Robert F. Trager from the University of Oxford.

As the months (years?) fly by, I guess we’ll find out if this hardware approach gains any traction where AI regulation is concerned.

Socially responsible AI: it’s time, say University of Manchester (UK) researchers

A May 10, 2018 news item on ScienceDaily describes a report on the ‘fourth industrial revolution’ being released by the University of Manchester,

The development of new Artificial Intelligence (AI) technology is often subject to bias, and the resulting systems can be discriminatory, meaning more should be done by policymakers to ensure its development is democratic and socially responsible.

This is according to Dr Barbara Ribeiro of Manchester Institute of Innovation Research at The University of Manchester, in On AI and Robotics: Developing policy for the Fourth Industrial Revolution, a new policy report on the role of AI and Robotics in society, being published today [May 10, 2018].

Interestingly, the US White House is hosting a summit on AI today, May 10, 2018, according to a May 8, 2018 article by Danny Crichton for TechCrunch (Note: Links have been removed),

Now, it appears the White House itself is getting involved in bringing together key American stakeholders to discuss AI and those opportunities and challenges. …

Among the confirmed guests are Facebook’s Jerome Pesenti, Amazon’s Rohit Prasad, and Intel’s CEO Brian Krzanich. While the event has many tech companies present, a total of 38 companies are expected to be in attendance including United Airlines and Ford.

AI policy has been top-of-mind for many policymakers around the world. French President Emmanuel Macron has announced a comprehensive national AI strategy, as has Canada, which has put together a research fund and a set of programs to attempt to build on the success of notable local AI researchers such as University of Toronto professor Geoffrey Hinton, who is a major figure in deep learning.

But it is China that has increasingly drawn the attention and concern of U.S. policymakers. The country and its venture capitalists are outlaying billions of dollars to invest in the AI industry, and it has made leading in artificial intelligence one of the nation’s top priorities through its Made in China 2025 program and other reports. …

In comparison, the United States has been remarkably uncoordinated when it comes to AI. …

That lack of engagement from policymakers has been fine — after all, the United States is the world leader in AI research. But with other nations pouring resources and talent into the space, DC policymakers are worried that the U.S. could suddenly find itself behind the frontier of research in the space, with particular repercussions for the defense industry.

Interesting contrast: do we take time to consider the implications or do we engage in a race?

While it’s becoming fashionable to dismiss dichotomous questions of this nature, the two approaches (competition and reflection) are not that compatible and it does seem to be an either/or proposition.

A May 10, 2018 University of Manchester press release (also on EurekAlert), which originated the news item, expands on the theme of responsibility and AI,

Dr Ribeiro adds that because investment in AI will essentially be paid for by taxpayers in the long term, policymakers need to make sure that the benefits of such technologies are fairly distributed throughout society.

She says: “Ensuring social justice in AI development is essential. AI technologies rely on big data and the use of algorithms, which influence decision-making in public life and on matters such as social welfare, public safety and urban planning.”

“In these ‘data-driven’ decision-making processes some social groups may be excluded, either because they lack access to devices necessary to participate or because the selected datasets do not consider the needs, preferences and interests of marginalised and disadvantaged people.”

On AI and Robotics: Developing policy for the Fourth Industrial Revolution is a comprehensive report written, developed and published by Policy@Manchester with leading experts and academics from across the University.

The publication is designed to help employers, regulators and policymakers understand the potential effects of AI in areas such as industry, healthcare, research and international policy.

However, the report doesn’t just focus on AI. It also looks at robotics, explaining the differences and similarities between the two separate areas of research and development (R&D) and the challenges policymakers face with each.

Professor Anna Scaife, Co-Director of the University’s Policy@Manchester team, explains: “Although the challenges that companies and policymakers are facing with respect to AI and robotic systems are similar in many ways, these are two entirely separate technologies – something which is often misunderstood, not just by the general public, but policymakers and employers too. This is something that has to be addressed.”

One particular area the report highlights where robotics can have a positive impact is in the world of hazardous working environments, such as nuclear decommissioning and clean-up.

Professor Barry Lennox, Professor of Applied Control and Head of the UOM Robotics Group, adds: “The transfer of robotics technology into industry, and in particular the nuclear industry, requires cultural and societal changes as well as technological advances.

“It is really important that regulators are aware of what robotic technology is and is not capable of doing today, as well as understanding what the technology might be capable of doing over the next 5 years.”

The report also highlights the importance of big data and AI in healthcare, for example in the fight against antimicrobial resistance (AMR).

Lord Jim O’Neill, Honorary Professor of Economics at The University of Manchester and Chair of the Review on Antimicrobial Resistance explains: “An important example of this is the international effort to limit the spread of antimicrobial resistance (AMR). The AMR Review gave 27 specific recommendations covering 10 broad areas, which became known as the ‘10 Commandments’.

“All 10 are necessary, and none are sufficient on their own, but if there is one that I find myself increasingly believing is a permanent game-changer, it is state of the art diagnostics. We need a ‘Google for doctors’ to reduce the rate of over prescription.”

The versatile nature of AI and robotics is leading many experts to predict that the technologies will have a significant impact on a wide variety of fields in the coming years. Policy@Manchester hopes that the On AI and Robotics report will contribute to helping policymakers, industry stakeholders and regulators better understand the range of issues they will face as the technologies play ever greater roles in our everyday lives.

As far as I can tell, the report has been designed for online viewing only. There are none of the markers (imprint date, publisher, etc.) that I expect to see on a print document. There is no bibliography or list of references but there are links to outside sources throughout the document.

It’s an interesting approach to publishing a report that calls for social justice, especially since the issue of ‘trust’ is increasingly being emphasized where all AI is concerned. With regard to this report, I’m not sure I can trust it. With a print document or a PDF, I have markers. I can examine the index, the bibliography, etc., and determine whether the material covers the subject area with reference to well known authorities. It’s much harder to do that with this report. As well, with this ‘souped up’ document it looks as if something could be changed without my knowledge. With a print or PDF version, I can compare copies of the document; not so with this one.

US White House’s grand computing challenge could mean a boost for research into artificial intelligence and brains

An Oct. 20, 2015 posting by Lynn Bergeson on Nanotechnology Now announces a US White House challenge incorporating nanotechnology, computing, and brain research (Note: A link has been removed),

On October 20, 2015, the White House announced a grand challenge to develop transformational computing capabilities by combining innovations in multiple scientific disciplines. See https://www.whitehouse.gov/blog/2015/10/15/nanotechnology-inspired-grand-challenge-future-computing The Office of Science and Technology Policy (OSTP) states that, after considering over 100 responses to its June 17, 2015, request for information, it “is excited to announce the following grand challenge that addresses three Administration priorities — the National Nanotechnology Initiative, the National Strategic Computing Initiative (NSCI), and the BRAIN initiative.” The grand challenge is to “[c]reate a new type of computer that can proactively interpret and learn from data, solve unfamiliar problems using what it has learned, and operate with the energy efficiency of the human brain.”

Here’s where the Oct. 20, 2015 posting by Lloyd Whitman, Randy Bryant, and Tom Kalil for the US White House blog, which originated the news item, gets interesting,

 While it continues to be a national priority to advance conventional digital computing—which has been the engine of the information technology revolution—current technology falls far short of the human brain in terms of both the brain’s sensing and problem-solving abilities and its low power consumption. Many experts predict that fundamental physical limitations will prevent transistor technology from ever matching these twin characteristics. We are therefore challenging the nanotechnology and computer science communities to look beyond the decades-old approach to computing based on the Von Neumann architecture as implemented with transistor-based processors, and chart a new path that will continue the rapid pace of innovation beyond the next decade.

There are growing problems facing the Nation that the new computing capabilities envisioned in this challenge might address, from delivering individualized treatments for disease, to allowing advanced robots to work safely alongside people, to proactively identifying and blocking cyber intrusions. To meet this challenge, major breakthroughs are needed not only in the basic devices that store and process information and the amount of energy they require, but in the way a computer analyzes images, sounds, and patterns; interprets and learns from data; and identifies and solves problems. [emphases mine]

Many of these breakthroughs will require new kinds of nanoscale devices and materials integrated into three-dimensional systems and may take a decade or more to achieve. These nanotechnology innovations will have to be developed in close coordination with new computer architectures, and will likely be informed by our growing understanding of the brain—a remarkable, fault-tolerant system that consumes less power than an incandescent light bulb.

Recent progress in developing novel, low-power methods of sensing and computation—including neuromorphic, magneto-electronic, and analog systems—combined with dramatic advances in neuroscience and cognitive sciences, lead us to believe that this ambitious challenge is now within our reach. …

This is the first time I’ve come across anything that publicly links the BRAIN initiative to computing, artificial intelligence, and artificial brains. (For my own sake, I make an arbitrary distinction between algorithms [artificial intelligence] and devices that simulate neural plasticity [artificial brains].) The emphasis in the past has always been on new strategies for dealing with Parkinson’s and other neurological diseases and conditions.