Tag Archives: Quebec

AI is an energy/water hog. Where is all the power coming from? plus UN defines new “era of global water bankruptcy”

I’ve touched on the topic of AI (artificial intelligence) and water consumption before, notably in my October 16, 2023 posting “The cost of building ChatGPT” where most of the focus is on the US. I now have some Canadian stories but first, there’s the United Nations University (UNU).

Global water bankruptcy

From a January 20, 2026 United Nations University press release (also on EurekAlert), Note 1: In the front pages, there’s this unexpected link to Canada: “UNU-INWEH [United Nations University Institute for Water, Environment and Health] gratefully acknowledges its host, the Government of Canada, and ongoing financial support from Global Affairs Canada.” Note 2: Links have been removed,

Amid chronic groundwater depletion, water overallocation, land and soil degradation, deforestation, and pollution, all compounded by global heating, a UN report today declared the dawn of an era of global water bankruptcy, inviting world leaders to facilitate “honest, science-based adaptation to a new reality.”

“Global Water Bankruptcy: Living Beyond Our Hydrological Means in the Post-Crisis Era,” argues that the familiar terms “water stressed” and “water crisis” fail to reflect today’s reality in many places: a post-crisis condition marked by irreversible losses of natural water capital and an inability to bounce back to historic baselines.

“This report tells an uncomfortable truth: many regions are living beyond their hydrological means, and many critical water systems are already bankrupt,” says lead author Kaveh Madani, Director of the UN University’s Institute for Water, Environment and Health (UNU-INWEH), known as ‘The UN’s Think Tank on Water.’

Expressed in financial terms, the report says many societies have not only overspent their annual renewable water “income” from rivers, soils, and snowpack, they have depleted long-term “savings” in aquifers, glaciers, wetlands, and other natural reservoirs.

This has resulted in a growing list of compacted aquifers, subsided land in deltas and coastal cities, vanished lakes and wetlands, and irreversibly lost biodiversity.

The UNU report is based on a peer-reviewed paper in the journal Water Resources Management that formally defines water bankruptcy as

1) persistent over-withdrawal from surface and groundwater relative to renewable inflows and safe levels of depletion; and

2) the resulting irreversible or prohibitively costly loss of water-related natural capital.

By contrast:

  • “Water stress” reflects high pressure that remains reversible
  • “Water crisis” describes acute shocks that can be overcome

The report is issued prior to a high-level meeting in Dakar, Senegal (26–27 Jan.) to prepare the 2026 UN Water Conference, to be co-hosted by the United Arab Emirates and Senegal 2–4 Dec. in the UAE.

While not every basin and country is water-bankrupt, Madani says, “enough critical systems around the world have crossed these thresholds. These systems are interconnected through trade, migration, climate feedbacks, and geopolitical dependencies, so the global risk landscape is now fundamentally altered.”

Madani underlines the following four essential points:

  • Water cannot be protected if we allow the hydrological cycle, the climate, and the underlying natural capital that produces water to be interrupted or damaged. The world has an important and still largely untapped strategic opportunity to act.
  • Water is an issue that crosses traditional political boundaries. It belongs to north and south, and to left and right. For that reason, it can serve as a bridge to create trust and unity between and within nations. In the fragmented world we live in, water can become a powerful focus for cooperation and for aligning national security with international priorities.
  • Investment in water is also investment in mitigating climate change, biodiversity loss, and desertification. Water should not be treated only as a downstream sector affected by other environmental crises. On the contrary, targeted investment in water can address the immediate concerns of communities and nations while also advancing the objectives of the Rio Conventions (climate, biodiversity, desertification).
  • A renewed global emphasis on water could help reaccelerate stalled negotiations and potentially reenergize halted international processes. A practical and cooperative focus on water offers a way to connect urgent local needs with long-term global goals.

Hotspots

In the Middle East and North Africa region, high water stress, climate vulnerability, low agricultural productivity, energy-intensive desalination, and sand and dust storms intersect with complex political economies;

In parts of South Asia, groundwater-dependent agriculture and urbanization have produced chronic declines in water tables and local subsidence; and

In the American Southwest, the Colorado River and its reservoirs have become symbols of over-promised water.

A world in the red

Drawing on global datasets and recent scientific evidence, the report presents a stark statistical overview of trends, the overwhelming majority caused by humans:

50%: Large lakes worldwide that have lost water since the early 1990s (with 25% of humanity directly dependent on those lakes)

50%: Global domestic water now derived from groundwater

40%+: Irrigation water drawn from aquifers being steadily drained

70%: Major aquifers showing long-term decline

410 million hectares: Area of natural wetlands – almost equal in size to the entire European Union – erased in the past five decades

30%+: Global glacier mass lost since 1970, with entire low- and mid-latitude mountain ranges expected to lose functional glaciers altogether within decades

Dozens: Major rivers that now fail to reach the sea for parts of the year

50+ years: How long many river basins and aquifers have been overdrawing their accounts

100 million hectares: Cropland damaged by salinization alone

And the human consequences:

75%: Humanity in countries classified as water-insecure or critically water-insecure

2 billion: People living on sinking ground

25 cm: Annual drop being experienced by some cities

4 billion: People facing severe water scarcity at least one month every year

170 million hectares: Irrigated cropland under high or very high water stress – equivalent to the areas of France, Spain, Germany, and Italy combined

US$5.1 trillion: Annual value of lost wetland ecosystem services

3 billion: People living in areas where total water storage is declining or unstable, with 50%+ of global food produced in those same stressed regions.

1.8 billion: People living under drought conditions in 2022–2023

US$307 billion: Current annual global cost of drought

2.2 billion: People who lack safely managed drinking water, while 3.5 billion lack safely managed sanitation

Says Madani: “Millions of farmers are trying to grow more food from shrinking, polluted, or disappearing water sources. Without rapid transitions toward water-smart agriculture, water bankruptcy will spread rapidly.”

A new diagnosis for a new era

A region can be flooded one year and still be water bankrupt, he adds, if long-term withdrawals exceed replenishment. In that sense, water bankruptcy is not about how wet or dry a place looks, but about balance, accounting, and sustainability.

Says Madani: As with global climate change or pandemics, a declaration of global water bankruptcy does not imply uniform impact everywhere, but that enough systems across regions and income levels have become insolvent and crossed irreversible thresholds to constitute a planetary-scale condition.

“Water bankruptcy is also global because its consequences travel,” Madani explains. “Agriculture accounts for the vast majority of freshwater use, and food systems are tightly interconnected through trade and prices. When water scarcity undermines farming in one region, the effects ripple through global markets, political stability, and food security elsewhere. This makes water bankruptcy not a series of isolated local crises, but a shared global risk that demands a new type of response: Bankruptcy management, not crisis management.”

A call to reset the global water agenda

The report warns that the current global water agenda – largely focused on drinking water, sanitation, and incremental efficiency improvements – is no longer fit for purpose in many places and calls for a new global water agenda that:

  • Formally recognizes the state of water bankruptcy
  • Recognizes water as both a constraint and an opportunity for meeting climate, biodiversity, and land commitments
  • Elevates water issues in climate, biodiversity, and desertification negotiations, development finance, and peacebuilding processes.
  • Embeds water-bankruptcy monitoring in global frameworks, using Earth observation, AI, and integrated modelling
  • Uses water as a catalyst to accelerate cooperation between the UN Member States

In practical terms, managing water bankruptcy requires governments to focus on the following priorities:

  • Prevent further irreversible damage such as wetland loss, destructive groundwater depletion, and uncontrolled pollution
  • Rebalance rights, claims, and expectations to match degraded carrying capacity
  • Support just transitions for communities whose livelihoods must change
  • Transform water-intensive sectors, including agriculture and industry, through crop shifts, irrigation reforms, and more efficient urban systems
  • Build institutions for continuous adaptation, with monitoring systems linked to threshold-based management

The report underlines that water bankruptcy is not merely a hydrological problem, but a justice issue with deep social and political implications requiring attention at the highest levels of government and multilateral cooperation. The burdens fall disproportionately on smallholder farmers, Indigenous Peoples, low-income urban residents, women and youth while the benefits of overuse often accrued to more powerful actors.

“Water bankruptcy is becoming a driver of fragility, displacement, and conflict,” says UN Under-Secretary-General Tshilidzi Marwala, Rector of UNU. “Managing it fairly – ensuring that vulnerable communities are protected and that unavoidable losses are shared equitably – is now central to maintaining peace, stability, and social cohesion.”

“Bankruptcy management requires honesty, courage, and political will,” Madani adds. “We cannot rebuild vanished glaciers or reinflate acutely compacted aquifers. But we can prevent further loss of our remaining natural capital, and redesign institutions to live within new hydrological limits.”

Upcoming milestones —  the 2026 and 2028 UN Water Conferences, the end of the Water Action Decade in 2028, and the 2030 SDG deadline, for example — provide critical opportunities to implement this shift, he says.

“Despite its warnings, the report is not a statement of hopelessness,” adds Madani. “It is a call for honesty, realism, and transformation. Declaring bankruptcy is not about giving up — it is about starting fresh. By acknowledging the reality of water bankruptcy, we can finally make the hard choices that will protect people, economies, and ecosystems. The longer we delay, the deeper the deficit grows.”

Here’s a link to and a citation for the report,

Global Water Bankruptcy: Living Beyond Our Hydrological Means in the Post-Crisis Era (or the PDF) by Kaveh Madani. Contributors: Mir Matin, Aria Farsi, Luying Wang, Amir AghaKouchak, Mohammed Azhar, Jenna Elshurafa, Sogol Jafarzadeh, Tafadzwanashe Mabhaudhi, Ali Mirchi, Abraham Nunbogu, Mojtaba Sadegh, Robert Sandford, Manoochehr Shirzaei, William Smyth, Hossein Tabari, MJ Tourian, Farshid Vahedifard. 2026, United Nations University Institute for Water, Environment and Health (UNU-INWEH), Richmond Hill, Ontario, Canada, DOI: 10.53328/INR26KAM001

AI data centre building spree in Canada (special emphasis: British Columbia [BC])

An October 18, 2025 article (with embedded videos) by Jonathan Montpetit and Yvette Brend with files from Tara Carman on Canadian Broadcasting Corporation’s (CBC) news online website,

On a dry, hot day this summer, Kathryn Barnwell, a retired English professor, marched up the road from her home in Nanaimo, B.C. [British Columbia], to take another crack at the mayor.

Leonard Krog, a longtime friend of Barnwell’s, was standing by the entrance to a parched wooded lot, the proposed site for a data centre Krog has been backing. 

“I really, really enjoin you to think about what this [data centre] could mean for your political career,” Barnwell said, barely looking him in the eye.

Krog, who has been mayor since 2018, sees the project as a chance to modernize the city’s economy.

“The kind of jobs that would be attracted to this kind of facility are the jobs of the future,” he said.

Until three years ago, Barnwell knew little about data centres, which house the computer servers that power much of the online world. But when the plot of land near her home was rezoned for one, she began researching. She’s now one of the loudest opponents of the project in Nanaimo.

Her main concern, shared by other local opponents, is the amount of municipal drinking water the 200,000-square foot data centre would need for its cooling system. In a region beset by drought, Barnwell says similar-sized facilities can churn through 70,000 litres of potable water a day.

“Life on this planet is sustained by water. It is not sustained by data. We don’t need data the way we need water,” Barnwell said. “And we in Canada have been pretty blithe about our natural resources.”

Barnwell sees herself as part of a global resistance movement drawing attention to the environmental impact of data centres, at a moment when the tech industry is spending dizzying sums to build them. 

It’s not just BC according to the October 18, 2025 article, Note: A link has been removed,

Canada is poised to join the data centre boom. The federal government, and some provinces, have been actively courting investors, vaunting the country’s cheap electricity (much of it hydro power) and cool climate. 

At least eight projects are underway to build hyperscale data centres in Canada, according to the federal government. But as such projects face greater scrutiny around the world, Canada is jumping into the AI construction race with few mechanisms to protect its water supply.

“There’s barely any regulation in place,” said Geoff White, executive director of the Public Interest Advocacy Centre, an Ottawa-based consumer protection group.

Microsoft builds out in Canada

Among the big tech companies, Microsoft has taken the lead in building data centres with AI capacity in Canada. The Washington-based corporation purchased seven large tracts of land in 2021, including a golf course near Quebec City and a former department store in the Toronto suburb of Etobicoke. 

It’s in the process of turning the sites into data centres capable of powering its AI-enabled products like Azure and Copilot, an investment worth at least $1 billion.

At least two of the Microsoft data centres in Ontario have been cleared by municipal authorities to consume vast amounts of municipal drinking water.

The Etobicoke data centre, dubbed YTO 40, was approved to use up to 39.75 litres of water per second for cooling purposes, according to planning documents submitted to the city. That would be the equivalent of around 1.2 billion litres a year, or 500 Olympic-sized swimming pools. 
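As a rough check on those figures, the conversion from the permitted per-second rate to an annual volume can be reproduced with back-of-envelope arithmetic (the round-the-clock draw and the 2.5-million-litre nominal Olympic pool are my own assumptions, not from the article):

```python
# Back-of-envelope check of the YTO 40 water figures reported above.
# Assumes continuous draw at the permitted maximum, which a real
# facility would not sustain year-round.
PERMITTED_RATE_L_PER_S = 39.75          # approved cooling draw, litres/second
SECONDS_PER_YEAR = 365 * 24 * 60 * 60   # 31,536,000
OLYMPIC_POOL_L = 2_500_000              # nominal 50 m x 25 m x 2 m pool

annual_litres = PERMITTED_RATE_L_PER_S * SECONDS_PER_YEAR
pools = annual_litres / OLYMPIC_POOL_L

print(f"{annual_litres / 1e9:.2f} billion litres/year")  # ~1.25
print(f"~{pools:.0f} Olympic pools")                     # ~501
```

The result, about 1.25 billion litres or roughly 500 pools, matches the CBC’s figure, though the permit is a ceiling rather than a forecast of actual use.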

A Microsoft data centre complex in nearby Vaughan, a city spokesperson said, is expected to consume 730 million litres of water annually. 

But according to Microsoft, its new Canadian data centres will only use a fraction of that amount, because of design features that allow them to be cooled using outdoor air and recycled rainwater. 

Alistair Speirs, general manager of Microsoft’s Azure global infrastructure, acknowledges traditional industrial cooling has “been a very water-intensive process.” He says the way Microsoft is building its data centres today is “with really that in mind.” 

“One of the great things about building in Canada, and in colder climates, is that we can just use free air cooling from outside air temperatures.”

The company said its data centres will only draw municipal water when outside temperatures are above 29.4 C or when indoor humidity levels drop below five per cent. 
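Microsoft’s stated trigger conditions amount to a simple either/or rule. Here is a minimal sketch of that logic (the function name and its encoding of the thresholds are mine, based only on the figures quoted above):

```python
def uses_municipal_water(outdoor_temp_c: float, indoor_humidity_pct: float) -> bool:
    """Sketch of the stated cooling policy: fall back to municipal
    water only in hot weather or when indoor air is very dry."""
    return outdoor_temp_c > 29.4 or indoor_humidity_pct < 5.0

# Typical Canadian conditions: free air cooling suffices.
print(uses_municipal_water(18.0, 40.0))  # False
# Heat wave: municipal draw kicks in.
print(uses_municipal_water(32.0, 40.0))  # True
```

The Dutch example that follows is a reminder that actual consumption depends on how often those thresholds are crossed, not just on the policy itself.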

Microsoft has made similar promises elsewhere. The company built a data centre in the northwestern Netherlands despite opposition from local farmers, promising it would only need between 12 and 20 million litres of water annually. 

Dutch media later revealed the data centre was consuming more than four times that — as locals were being asked to limit their own water use. 

In its response to Dutch media, Microsoft said the initial estimate had been based on “consumption at that time,” but did not specify what time period it was referring to.

Growing concerns and protests but not so much in Canada, from the October 18, 2025 article, Note: A link has been removed,

The new Microsoft data centres in Canada, which are slated to come online in the coming months, have faced no discernible opposition from the public. One Etobicoke city councillor wasn’t even aware of the YTO 40 project before CBC News contacted him.

That’s in stark contrast to communities in the United States, Europe and Latin America, where concerns about water scarcity have sparked protests.

Last month, Google shelved plans to build a $1-billion US data centre in Indianapolis, Ind., after residents organized a months-long campaign against the project. When a lawyer representing Google abruptly announced the decision at a city council meeting, the room erupted in applause that lasted for nearly a minute.

A growing number of jurisdictions in the U.S. and Europe are also seeking to pass regulations that would limit data centre water consumption or force companies to be more transparent about how much they’re using. 

Canada’s federal government has set aside $700 million to fund data centre projects here. But aside from energy regulators, who review data centre applications to connect to power grids, there is little industry oversight.

“If we’re racing ahead and thinking only about the economic benefits, and not thinking about the downstream impacts to our environment, that’s negligent,” said White with the Public Interest Advocacy Centre. “I think Canadians ought to be concerned. Our water is highly sought after, and will be as the world gets hotter.”

Elsewhere in the October 18, 2025 article, there’s information about water use in data centres,

How much water do chatbots drink?

Data centres are as old as computers and until recently were relatively uncontroversial — boring bits of IT infrastructure tucked away in non-descript office spaces.

But with the advent of cloud computing in the mid-2000s, they dramatically increased in size. 

These data centres — buildings ranging anywhere from 10,000 to 100,000 square feet — required upwards of 100 megawatts of power and millions of litres of water annually for their cooling systems.

These demands have only been turbocharged by artificial intelligence, which requires data centres that house thousands of densely packed high-performance chips, operating around the clock — and generating heat.

A study done in 2023 estimated that generating between 10 and 50 medium-sized responses in ChatGPT — the AI-powered chatbot — consumed about 500 millilitres of water. That accounts for both the water required to produce the electricity needed to run the data centre (435 millilitres) and cool it down (the remaining 65 millilitres).
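Taking the study’s figures at face value, the per-response water cost spans a wide range; this quick calculation is my own arithmetic on the numbers quoted above:

```python
# Water cost per ChatGPT response, per the 2023 study cited above:
# ~500 mL across 10-50 medium-sized responses.
TOTAL_ML = 500
ELECTRICITY_ML = 435   # water consumed generating the electricity
COOLING_ML = 65        # water evaporated cooling the data centre
assert ELECTRICITY_ML + COOLING_ML == TOTAL_ML

low = TOTAL_ML / 50    # optimistic end of the study's range
high = TOTAL_ML / 10   # pessimistic end

print(f"{low:.0f}-{high:.0f} mL of water per response")  # 10-50 mL
```

Notably, most of that footprint (435 of 500 mL) sits upstream in electricity generation, not in the data centre’s own cooling loop.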

A separate study, conducted by the International Energy Agency, estimated that in 2023, data centres around the world consumed around 140 billion litres of water just for cooling. 

Much of that was potable water pulled from municipal utilities. (Because data centres generally use evaporative cooling systems, untreated water can damage the sensitive computer equipment inside.)

If you have time, the October 18, 2025 article is worth reading in its entirety.

This October 15, 2025 article by Amanda Follett Hosgood for The Tyee is focused on BC’s approach to AI water consumption, Note: Links have been removed,

B.C. recently saw its first AI data centres open in Prince George and Kamloops, and more are on the way. AI centres have been touted as a way to grow the economy while ensuring data sovereignty by storing information within our borders.

“AI is everywhere. It’s changing how we work. It’s changing how we learn. It’s changing how we do business,” said Port Moody-Burquitlam MLA Rick Glumac, who this summer became B.C.’s minister of state for artificial intelligence and new technologies. The position comes with a mandate to expand B.C.’s AI sector.

“There’s a lot of good work ahead,” Glumac, who comes from a tech background [emphases mine], told The Tyee.

But there’s a hitch. AI is just one of various potential boom industries vying for a piece of B.C.’s limited electricity supply.

AI data centres are energy intensive, requiring immense amounts of electricity for power and cooling. B.C.’s hydroelectric grid, which is fed almost entirely by renewable sources, offers a clean — but limited — energy source that’s attractive to businesses seeking to market themselves as environmentally conscious.

As the province looks to green the existing economy, transition to electric vehicles and expand industries like LNG using cleaner energy, AI is fast becoming one more customer seeking a piece of the power pie.

Glumac’s technical background? From the Rick Glumac Wikipedia entry, Note: Links have been removed,

Glumac worked much of his career in the field of computer graphics as a software developer, visual effects artist, and computer graphics supervisor.[8]  He worked on the first computer-animated TV show ReBoot, and later worked for companies such as DreamWorks and Electronic Arts on well-known Hollywood films such as Shrek 2, Madagascar, and Over the Hedge.[8] Following this he developed apps for the iPhone.[7]

That’s a bit of a leap for Mr. Glumac. Developing computer graphics is not the same thing as shepherding new and emerging technologies through government regulations and creating new regulations, dealing with public hopes/fears, anticipating energy needs, and dealing with any unintended consequences of the technologies themselves.

Follett Hosgood’s October 15, 2025 article provides an overview of the energy and data centre situation in BC,

B.C. is the first province in Canada to create a cabinet position dedicated to AI. But the province isn’t alone in signalling its interest in the industry.

The federal government created its own minister of artificial intelligence and digital innovation following the spring election, tapping former broadcaster Evan Solomon for the position.

B.C.’s parallel cabinet position “gives us the opportunity to really put a focus on this and to partner with the federal government,” Glumac told The Tyee.

In an email, B.C.’s Ministry of Energy and Climate Solutions said the province groups data centres into three categories: conventional data centres, cryptocurrency mining and AI data centres.

It added that there are currently 12 “notable” conventional data centres in the province and three more requesting a power connection. If approved, the combined operations would draw nearly 40 megawatts of power — a small slice of the province’s 12,000-megawatt power supply.

AI data centres, however, can each draw more than 100 megawatts of power.

Two of Canada’s largest telecommunications companies recently announced plans to open AI data centres in B.C.

In May [2025], Bell Canada said it would open an AI “data centre supercluster” that is expected to use upwards of 500 megawatts, or about five per cent of the province’s current power supply.

Its first AI data centre, a seven-megawatt facility in Kamloops, opened in June. A second seven-megawatt facility is slated to open in Merritt by the end of next year.

The company is planning two additional 26-megawatt data centres in the near future, one in partnership with Thompson Rivers University and the other with the Upper Nicola Band. It says another two data centres with a combined capacity of more than 400 megawatts are in “advanced planning stages.”

Bell declined to provide detailed timelines, confirming in an email only that its Kamloops site is currently operational. “We remain on track and more sites will open in the coming months,” a spokesperson wrote.

The company also faces competition.

In April [2025], Telus announced two Canadian data centres, one in B.C., touting the operations as “fully owned, operated and secured on Canadian soil by a Canadian company” — a nod to national concerns over data sovereignty.

The Kamloops operation will be “powered by 99 per cent renewable energy,” Telus said, but how much power it will draw is unclear. The company didn’t respond to The Tyee’s questions about capacity or when it might come online.

Asked about how these data centres will fit into B.C.’s power grid, Glumac said that “BC Hydro is monitoring this very closely and planning accordingly.” The industry is evolving quickly, he added, and he wouldn’t rule out the possibility that the province would need to regulate expansion as it did with cryptocurrency mining.

“We want to make sure that clean energy supports not just data centres but supports the people in British Columbia and supports economic opportunities and job opportunities,” Glumac said. “It’s very important to monitor that and to balance all of that, and BC Hydro is doing that.”

BC Hydro directed The Tyee’s questions to B.C.’s Energy Ministry, which also provided an emailed statement.

“BC Hydro continues to look at how the growth in the industry could impact future demand, and will adjust its forecasts and planning accordingly,” a ministry spokesperson wrote, adding that the province is committed to “balancing energy demand with economic priorities.”

“We recognize that the AI industry is evolving rapidly, and we are closely monitoring how advancements in AI infrastructure may impact future energy needs.”

AI data centres don’t have to be a problem

It’s not all doom and gloom, from Follett Hosgood’s October 15, 2025 article,

Last year, the province [BC] imported a quarter of its electricity needs, most of it from the United States and Alberta, where it was generated using fossil fuels. In both 2024 and earlier this year, BC Hydro put out calls for power in an effort to make up the shortfall with clean, locally produced power.

Kate Harland is the research lead for clean growth at the Canadian Climate Institute. In an interview with The Tyee, she said that now is the time for governments to plan for the expected spike in energy demand from AI data centres.

“There is a lot of interest right now across Canada in having AI-enabled data centres,” Harland said. But she added that there’s likely to be a “tipping point” where AI’s benefits might not outweigh its demands on the power grid.

“If suddenly data centres are 20 per cent or 30 per cent of your total electricity demand, then you get into a new territory of questions,” she said.

While provinces such as B.C. and Quebec have traditionally taken a “first come, first served” approach to industrial power requests, some jurisdictions are implementing new rules to ensure limited power supply is allocated fairly and for the greatest overall benefit, Harland said.

Last year [2024], Quebec began requiring any projects requesting more than five megawatts of power to get ministerial approval. The approval considers factors such as economic impact, social impact and power requirements.

In 2023, Quebec’s government also signed an agreement with Microsoft as it launched four new data centres in the province. The tech giant agreed to reduce its energy consumption by 30 per cent during times of peak power use.

Harland said the pressure to meet power demand could be approached as an opportunity to build out renewables and increase supply. If data centres become more efficient over time, that would free up renewable power for domestic uses like electric vehicles and heat pumps, she said.

AI is also credited with identifying efficiencies, including in power use, which could help to offset its draw on the grid, Harland said. (Glumac also pointed to a recent study indicating that it could drive $200 billion in productivity improvements nationally.)

The technology’s practical uses tend to set it apart from cryptocurrency in the discussion about which industries get priority to grid access, Harland said.

The potential for data sovereignty is another argument in its favour.

But Harland emphasized that now is the time for governments to be proactive in forming AI policies.

If you have the time, do read Follett Hosgood’s October 15, 2025 article in its entirety.

If you have even more time, I provided some detail about the federal government and its new Minister of AI and Digital Innovation in an October 17, 2025 posting (scroll down to the Canada and its Minister of AI and Digital Innovation subhead) for information about Evan Solomon, the new minister.

What about local governments?

Municipalities may also have a role to play as data centres become more important in their real estate markets, as this January 31, 2026 article by Kenneth Chan for the Daily Hive hints, Note: Links have been removed,

Westbank’s major downtown Vancouver office tower project at steam plant site pivots to hotel, residential, and data centre uses

One of downtown Vancouver’s largest office development projects, first planned during the pre-pandemic office market boom, will not proceed as originally approved [emphasis mine], given the prevailing weak office market conditions.

Instead, the office tower project previously approved for 150 West Georgia St. (formerly addressed as 720 Beatty St.) — situated at the southwest corner of Beatty Street and West Georgia Street, immediately adjacent to BC Place Stadium’s northeast corner — is now in the very early stages of being repositioned as a mixed-use hotel and residential tower with a data centre [emphases mine], based on an all-new architectural design concept that also adds density and height.

A number of preliminary conceptual artistic renderings also show this drastic pivot.

All of this will be integrated into the district utility company Creative Energy’s new on-site replacement and expanded steam plant facilities, which have incurred major cost increases and experienced delays, including factors related to local developer Westbank’s liquidity challenges.

Pivot to a new tower with hotel, residential, and data centre uses on top of the Creative Energy facility

In October 2020, Vancouver City Council approved Westbank’s original rezoning application for redeveloping this site into an office tower and a standalone entertainment pavilion building, with below-grade parking and a new replacement steam plant.

Moving forward, essentially everything below grade — including the new vehicle parking and the Creative Energy facility — as well as the new entertainment pavilion building, will remain unchanged, while the office tower project above grade will not proceed.

Instead, the previous 264-ft.-tall, 17-storey, bulky, S-shaped office tower concept — designed by Bjarke Ingels Group and HCMA — with 583,000 sq. ft. of office space and 12,000 sq. ft. of additional ground-level retail/restaurant space has been completely scrapped and is now envisioned to become a 450-ft-tall, 48-storey, mixed-use hotel and residential tower with a data centre and ground-level retail/restaurant space, for a total of roughly 700,000 sq. ft. of building floor area.

The significantly increased height for added density is made possible by City Council’s July 2023-approved sweeping city-wide changes [emphasis mine] to the protected mountain view cones. Design revisions for taller heights are also set to occur for the nearby future Plaza of Nations and Concord Landing projects, made possible by these view cone changes.

… a Westbank spokesperson previously confirmed to Daily Hive Urbanized that they are looking into adding major data centre uses [emphasis mine] to the 1977-built, six-storey office building at 111 East 5th Ave. This distinctive brick building — part of Westbank’s Main Alley tech campus of new and renovated office buildings in the vicinity of the intersection of Main Street and East 5th Avenue in Mount Pleasant — is perhaps best known for being one of Hootsuite’s office locations since 2014. Westbank noted that at this time, Hootsuite is still the building’s primary tenant.

How will these and future data centres affect Vancouverites’ energy needs and access to water? Hopefully, someone in Vancouver’s city government is doing some thinking on these matters.

Rémi Quirion has an opinion about US-Canada science and about science diplomacy

Rémi Quirion is chief scientist of the province of Québec, Canada, chief executive officer of Fonds de recherche du Québec (FRQ), and president of the International Network for Governmental Science Advice (INGSA), Auckland, New Zealand. His March 13, 2025 editorial about science, collaboration, and US-Canada relations in light of Mr. Donald Trump’s constant assaults against Canadian sovereignty was published in the American Association for the Advancement of Science (AAAS) Science magazine, Note: A link has been removed,

A partnership can be demanding, and as with any couple, can have good days and bad. The United States–Canada relationship is most definitely having a bad one. It’s difficult to fully comprehend all the dimensions of the current threats to one of the world’s strongest, longest, and multifaceted alliances. From contemptuous musings on annexation to a tariff war that could wreak economic havoc on both sides of the border, the insults and aggravations are stoking uncertainty about a relationship that has flourished for decades. …

The number one partner for Canadian science is by far the United States. For the past 5 years, 27% of all Canadian scientific publications were coauthored with American colleagues (according to a Canadian bibliometric database and the Web of Science). And the reverse is true as well. Canadian scientists are prominent international partners of American scientists in published research. Long-standing major programs between the two countries include joint research projects on the Great Lakes, the Arctic, space, health (including global public health), climate monitoring, artificial intelligence (AI), subatomic physics, and data sharing. Despite the uncertainty around tariffs, active partnerships have recently been reconfirmed and even extended between federal funding organizations in both countries. These include interactions between the US National Science Foundation and the Natural Sciences and Engineering Research Council of Canada as well as Canada’s Social Science and Humanities Research Council. Such efforts are also strong at the regional level. For instance, research between Massachusetts and Québec focuses on climate change, biotechnology, and transportation, an alliance rooted in enduring cultural links.

… For decades, graduate students in Canada have continued training in the United States as postdoctoral fellows, and some have chosen to stay and forge fruitful collaborations with scientists in Canada. … American fellows coming to Canada to pursue their studies are not as numerous but are particularly interested in AI, quantum computing, clean energy, and environmental studies as well as the life sciences. Considering the current situation, it may be tempting for Canada to use the opportunity to lure both younger and well-established Canadian scientists back to Canada. Indeed, Canada is already receiving inquiries in that regard. …

On both sides of the border, additional collaboration should focus on building capacity to advise elected officials and high-level policy-makers on scientific issues. Going further, the International Network for Governmental Science Advice (INGSA) and its 130 member countries, of which I am chair, aim to take on this challenge globally with three chapters in the Global South (Kuala Lumpur, Malaysia; Buenos Aires, Argentina; and Port Louis, Mauritius) as well as new European (Oxford, United Kingdom) and North American (Montreal, Canada) chapters that will be inaugurated over the next 2 years. A major objective is to increase the ability to offer advice not only at the national level but also to subregional and local officials who often must make critical decisions under emergency conditions.

Strengthening science diplomacy is more urgent than ever in North America and around the world. The American Association for the Advancement of Science (AAAS, the publisher of Science) and the United Kingdom’s Royal Society have just released an updated framework on this topic as did the European Commission. In Québec, the Fonds de recherche du Québec launched a program this year to create new chairs in science diplomacy that will cultivate a network of experts across scientific disciplines throughout the province. The intent is to leverage the network to establish strong international science and policy partnerships.

Canada now has a new prime minister in place, and with the stability of US-Canada relations at stake, scientific partnerships should be upheld by the leaders of both nations. …

Here’s a link and a citation,

Uphold US-Canada science by Rémi Quirion. Science 13 Mar 2025 Vol 387, Issue 6739 p. 1127 DOI: 10.1126/science.adx2966

This editorial appears to be open access.

US science no longer no. 1

Not mentioned in Quirion’s editorial is the anxiety that the American scientific community appears to be suffering from. The days when US science led the world have either come to an end or will shortly, depending on which opinion piece you’re reading. What’s not in question is that the days when US science dominated the world scene are over, as this January 21, 2022 article by Jeffrey Mervis for the AAAS’s Science Insider makes clear,

A new data-rich report by the National Science Foundation (NSF) confirms China has overtaken the United States as the world’s leader in several key scientific metrics, including the overall number of papers published and patents awarded. U.S. scientists also have serious competition from foreign researchers in certain fields, it finds.

That loss of hegemony raises an important question for U.S. policymakers and the country’s research community, according to NSF’s oversight body, the National Science Board (NSB). “Since across-the-board leadership in [science and engineering] is no longer a possibility, what then should our goals be?” NSB asks in a policy brief that accompanies this year’s Science and Engineering Indicators, NSF’s biennial assessment of global research, which was released this week. (NSF has converted a single gargantuan volume into nine thematic reports, summarized in The State of U.S. Science and Engineering 2022.)

“It would be the height of hubris to think that [the United States] would lead in everything,” Phillips [Julia Phillips, an applied physicist who chairs the NSB committee that oversees Indicators] says. “So, I think the most important thing is for the United States to decide where it cannot be No. 2.”

At the top of her priorities is sustaining the federal government’s financial support of fundamental science. “If we lead in basic research, then we’re still in a really good position,” she says. But the government’s “record over the last decades does not give me a lot of cause for hope.” For example, Phillips says she is not optimistic that Congress will approve pending legislation that envisions a much larger NSF over the next 5 years, or a 2022 appropriations bill that would give NSF a lot more money right away.

Falling behind

[Note: The graphic which illustrates the statistics more clearly has not been reproduced here.]

The United States trailed China in contributing to the growth in global research spending over the past 2 decades. Contribution to global R&D growth: China 29%; United States 23%; European Union 17%; Other 14%; South Korea & Japan 9%; Other Asia 7%. (Graphic: K. Franklin/Science; Data: The State of U.S. Science and Engineering 2022/National Science Foundation)

Canadians certainly know a thing or two about not being no. 1, and maybe we could offer some advice on how to deal with that reality.

In the meantime, the US looks more and more frantic as it attempts to come to terms with its new status both scientifically and in every other way.

Canadian research into nanomaterial workplace exposure in the air and on surfaces

An August 30, 2018 news item on Nanowerk announces the report,

The monitoring of air contamination by engineered nanomaterials (ENM) is a complex process with many uncertainties and limitations owing to the presence of particles of nanometric size that are not ENMs, the lack of validated instruments for breathing zone measurements and the many indicators to be considered.

In addition, some organizations, France’s Institut national de recherche et de sécurité (INRS) and Québec’s Institut de recherche Robert-Sauvé en santé et en sécurité du travail (IRSST) among them, stress the need to also sample surfaces for ENM deposits.

In other words, to get a better picture of the risks of worker exposure, we need to fine-tune the existing methods of sampling and characterizing ENMs and develop new ones. Accordingly, the main goal of this project was to develop innovative methodological approaches for detailed qualitative as well as quantitative characterization of workplace exposure to ENMs.

A PDF of the 88-page report is available in English or in French.

An August 30, 2018 (?) abstract of the IRSST report titled An Assessment of Methods of Sampling and Characterizing Engineered Nanomaterials in the Air and on Surfaces in the Workplace (2nd edition) by Maximilien Debia, Gilles L’Espérance, Cyril Catto, Philippe Plamondon, André Dufresne, Claude Ostiguy, which originated the news item, outlines what you can expect from the report,

This research project has two complementary parts: a laboratory investigation and a fieldwork component. The laboratory investigation involved generating titanium dioxide (TiO2) nanoparticles under controlled laboratory conditions and studying different sampling and analysis devices. The fieldwork comprised a series of nine interventions adapted to different workplaces and designed to test a variety of sampling devices and analytical procedures and to measure ENM exposure levels among Québec workers.

The methods for characterizing aerosols and surface deposits that were investigated include: i) measurement by direct-reading instruments (DRI), such as condensation particle counters (CPC), optical particle counters (OPC), laser photometers, aerodynamic diameter spectrometers and electric mobility spectrometers; ii) transmission electron microscopy (TEM) or scanning transmission electron microscopy (STEM) with a variety of sampling devices, including the Mini Particle Sampler® (MPS); iii) measurement of elemental carbon (EC); iv) inductively coupled plasma mass spectrometry (ICP-MS); and v) Raman spectroscopy.

The workplace investigations covered a variety of industries (e.g., electronics, manufacturing, printing, construction, energy, research and development) and included producers as well as users or integrators of ENMs. In the workplaces investigated, we found nanometals or metal oxides (TiO2, SiO2, zinc oxides, lithium iron phosphate, titanate, copper oxides), nanoclays, nanocellulose and carbonaceous materials, including carbon nanofibers (CNF) and carbon nanotubes (CNT)—single-walled (SWCNT) as well as multiwalled (MWCNT).

The project helped to advance our knowledge of workplace assessments of ENMs by documenting specific tasks and industrial processes (e.g., printing and varnishing) as well as certain as yet little investigated ENMs (nanocellulose, for example).

Based on our investigations, we propose a strategy for more accurate assessment of ENM exposure using methods that require a minimum of preanalytical handling. The recommended strategy is a systematic two-step assessment of workplaces that produce and use ENMs. The first step involves testing with different DRIs (such as a CPC and a laser photometer) as well as sample collection and subsequent microscopic analysis (MPS + TEM/STEM) to clearly identify the work tasks that generate ENMs. The second step, once work exposure is confirmed, is specific quantification of the ENMs detected. The following findings are particularly helpful for detailed characterization of ENM exposure:

  1. The first conclusive tests of a technique using ICP-MS to quantify the metal oxide content of samples collected in the workplace
  2. The possibility of combining different sampling methods recommended by the National Institute for Occupational Safety and Health (NIOSH) to measure elemental carbon as an indicator of CNT/CNF, as well as demonstration of the limitation of this method stemming from observed interference with the black carbon particles required to synthesize carbon materials (for example, Raman spectroscopy showed that less than 6% of the particles deposited on the electron microscopy grid at one site were SWCNTs)
  3. The clear advantages of using an MPS (instead of the standard 37-mm cassettes used as sampling media for electron microscopy), which allows quantification of materials
  4. The major impact of sampling time: a long sampling time overloads electron microscopy grids and can lead to overestimation of average particle agglomerate size and underestimation of particle concentrations
  5. The feasibility and utility of surface sampling, either with sampling pumps or passively by diffusion onto the electron microscopy grids, to assess ENM dispersion in the workplace

These original findings suggest promising avenues for assessing ENM exposure, while also showing their limitations. Improvements to our sampling and analysis methods give us a better understanding of ENM exposure and help in adapting and implementing control measures that can minimize occupational exposure.

You can download the full report in either or both English and French from the ‘Nanomaterials – A Guide to Good Practices Facilitating Risk Management in the Workplace, 2nd Edition‘ webpage.

Clean up oil spills (on water and/or land) with oil-eating bacterium

Quebec’s Institut national de la recherche scientifique (INRS) announced an environmentally friendly way of cleaning up oil spills in an April 9, 2018 news item on ScienceDaily,

From pipelines to tankers, oil spills and their impact on the environment are a source of concern. These disasters occur on a regular basis, leading to messy decontamination challenges that require massive investments of time and resources. But however widespread and serious the damage may be, the solution could be microscopic — Alcanivorax borkumensis — a bacterium that feeds on hydrocarbons. Professor Satinder Kaur Brar and her team at INRS have conducted laboratory tests that show the effectiveness of enzymes produced by the bacterium in degrading petroleum products in soil and water. Their results offer hope for a simple, effective, and eco-friendly method of decontaminating water and soil at oil sites.

An April 8, 2018 INRS news release by Stephanie Thibaut, which originated the news item, expands on the theme,

In recent years, researchers have sequenced the genomes of thousands of bacteria from various sources. Research associate Dr. Tarek Rouissi pored over “technical data sheets” for many bacterial strains with the aim of finding the perfect candidate for a dirty job: cleaning up oil spills. He focused on the enzymes they produce and the conditions in which they evolve.

A. borkumensis, a non-pathogenic marine bacterium, piqued his curiosity. The microorganism’s genome contains the codes of a number of interesting enzymes, and it is classified as “hydrocarbonoclastic”—i.e., as a bacterium that uses hydrocarbons as a source of energy. A. borkumensis is present in all oceans and drifts with the current, multiplying rapidly in areas where the concentration of oil compounds is high, which partly explains the natural degradation observed after some spills. But its remedial potential had not been assessed.

“I had a hunch,” Rouissi said, “and the characterization of the enzymes produced by the bacterium seems to have proven me right!” A. borkumensis boasts an impressive set of tools: during its evolution, it has accumulated a range of very specific enzymes that degrade almost everything found in oil. Among these enzymes, the bacteria’s hydroxylases stand out from the ones found in other species: they are far more effective, in addition to being more versatile and resistant to chemical conditions, as tested in coordination by a Ph.D. student, Ms. Tayssir Kadri.

To test the microscopic cleaner, the research team purified a few of the enzymes and used them to treat samples of contaminated soil. “The degradation of hydrocarbons using the crude enzyme extract is really encouraging and reached over 80% for various compounds,” said Brar. “The process is effective in removing benzene, toluene, and xylene, and has been tested under a number of different conditions to show that it is a powerful way to clean up polluted land and marine environments.”

The next steps for Brar’s team are to find out more about how these bacteria metabolize hydrocarbons and explore their potential for decontaminating sites. One of the advantages of the approach developed at INRS is its application in difficult-to-access environments, which present a major challenge during oil spill cleanup efforts.

Here’s a link to and a citation for the paper,

Ex-situ biodegradation of petroleum hydrocarbons using Alcanivorax borkumensis enzymes by Tayssir Kadri, Sara Magdouli, Tarek Rouissi, Satinder Kaur Brar. Biochemical Engineering Journal Volume 132, 15 April 2018, Pages 279-287 DOI: https://doi.org/10.1016/j.bej.2018.01.014

This paper is behind a paywall.

In light of this research, it seems remiss not to mention the recent setback for Canada’s Trans Mountain pipeline expansion. Canada’s Federal Court of Appeal quashed the approval, as per this August 30, 2018 news item on canadanews.org. There were two reasons for the quashing: (1) a failure to properly consult with Indigenous people, and (2) a failure to adequately assess environmental impacts on marine life. Interestingly, no one ever mentions environmental cleanups and remediation, which could be very important if my current suspicions regarding the outcome of the next federal election are correct.

Regardless of which party forms the Canadian government after the 2019 federal election, I believe that either the Liberals or the Conservatives would be equally dedicated to bringing this pipeline to the West Coast. The only possibility I can see of a change lies in a potential minority government formed by a coalition including the NDP (New Democratic Party) and/or the Green Party, an outcome that seems improbable at this juncture.

Given what I believe to be the political will regarding the Trans Mountain pipeline, I would dearly love to see more support for better cleanup and remediation measures.

The Hedy Lamarr of international research: Canada’s Third assessment of The State of Science and Technology and Industrial Research and Development in Canada (2 of 2)

Taking up from where I left off with my comments on Competing in a Global Innovation Economy: The Current State of R and D in Canada or, as I prefer to call it, the Third assessment of Canada’s S&T (science and technology) and R&D (research and development). (Part 1 for anyone who missed it.)

Is it possible to get past Hedy?

Interestingly (to me anyway), one of our R&D strengths, the visual and performing arts, features sectors where a preponderance of people are dedicated to creating culture in Canada and don’t spend a lot of time trying to make money so they can retire before the age of 40, as so many of our start-up founders do. (Retiring before the age of 40 just reminded me of Hollywood actresses [Hedy] who found, and still do find, that work was/is hard to come by after that age. You may be able but I’m not sure I can get past Hedy.) Perhaps our business people (start-up founders) could take a leaf out of the visual and performing arts handbook? Or, not. There is another question.

Does it matter if we continue to be a ‘branch plant’ economy? Somebody once posed that question to me when I was grumbling that our start-ups never led to larger businesses and acted more like incubators (which could describe our R&D as well). He noted that Canadians have a pretty good standard of living and we’ve been running things this way for over a century and it seems to work for us. Is it that bad? I didn’t have an answer for him then and I don’t have one now, but I think it’s a useful question to ask, and no one on this (2018) expert panel or the previous expert panel (2013) seems to have asked it.

I appreciate that the panel was constrained by the questions given by the government but given how they snuck in a few items that technically speaking were not part of their remit, I’m thinking they might have gone just a bit further. The problem with answering the questions as asked is that if you’ve got the wrong questions, your answers will be garbage (GIGO; garbage in, garbage out) or, as is said, where science is concerned, it’s the quality of your questions.

On that note, I would have liked to know more about the survey of top-cited researchers. I think looking at the questions could have been quite illuminating, and I would have liked some information on where (geographically) and in which areas of specialization they got most of their answers. In keeping with past practice (2012 assessment published in 2013), there is no additional information offered about the survey questions or results. Still, there was this (from the report released April 10, 2018; Note: There may be some difference between the formatting seen here and that seen in the document),

3.1.2 International Perceptions of Canadian Research
As with the 2012 S&T report, the CCA commissioned a survey of top-cited researchers’ perceptions of Canada’s research strength in their field or subfield relative to that of other countries (Section 1.3.2). Researchers were asked to identify the top five countries in their field and subfield of expertise: 36% of respondents (compared with 37% in the 2012 survey) from across all fields of research rated Canada in the top five countries in their field (Figure B.1 and Table B.1 in the appendix). Canada ranks fourth out of all countries, behind the United States, United Kingdom, and Germany, and ahead of France. This represents a change of about 1 percentage point from the overall results of the 2012 S&T survey. There was a 4 percentage point decrease in how often France is ranked among the top five countries; the ordering of the top five countries, however, remains the same.

When asked to rate Canada’s research strength among other advanced countries in their field of expertise, 72% (4,005) of respondents rated Canadian research as “strong” (corresponding to a score of 5 or higher on a 7-point scale) compared with 68% in the 2012 S&T survey (Table 3.4). [pp. 40-41 Print; pp. 78-70 PDF]

Before I forget, there was mention of the international research scene,

Growth in research output, as estimated by number of publications, varies considerably for the 20 top countries. Brazil, China, India, Iran, and South Korea have had the most significant increases in publication output over the last 10 years. [emphases mine] In particular, the dramatic increase in China’s output means that it is closing the gap with the United States. In 2014, China’s output was 95% of that of the United States, compared with 26% in 2003. [emphasis mine]

Table 3.2 shows the Growth Index (GI), a measure of the rate at which the research output for a given country changed between 2003 and 2014, normalized by the world growth rate. If a country’s growth in research output is higher than the world average, the GI score is greater than 1.0. For example, between 2003 and 2014, China’s GI score was 1.50 (i.e., 50% greater than the world average) compared with 0.88 and 0.80 for Canada and the United States, respectively. Note that the dramatic increase in publication production of emerging economies such as China and India has had a negative impact on Canada’s rank and GI score (see CCA, 2016).
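To make the Growth Index concrete, here is a minimal sketch of how such a measure could be computed. The function names and the toy numbers are my own illustration; the CCA’s exact methodology may differ in details (e.g., smoothing or fractional counting):

```python
# Hypothetical sketch of a Growth Index (GI) as described in the report:
# a country's growth in research output between two years, normalized by
# world growth over the same period. GI > 1.0 means above-world-average growth.

def growth_rate(output_start: float, output_end: float) -> float:
    """Relative growth in publication output between two years."""
    return output_end / output_start

def growth_index(country_start: float, country_end: float,
                 world_start: float, world_end: float) -> float:
    """Country growth normalized by world growth."""
    return growth_rate(country_start, country_end) / growth_rate(world_start, world_end)

# Toy numbers (not the report's data): a country that triples its output
# while world output doubles has GI = 1.5, i.e., 50% above the world average,
# the same score the report gives China for 2003-2014.
print(round(growth_index(100, 300, 10_000, 20_000), 2))  # 1.5
```

On this reading, Canada’s 0.88 and the United States’ 0.80 simply mean both countries grew more slowly than world output as a whole, even while their absolute output grew.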

As long as I’ve been blogging (10 years), the international research community (in particular the US) has been looking over its shoulder at China.

Patents and intellectual property

As an inventor, Hedy got more than one patent. Much has been made of the fact that, despite an agreement, the US Navy did not pay her or her partner (George Antheil) for work that would lead to significant military use (apparently, it was instrumental in the Bay of Pigs incident, for those familiar with that bit of history), as well as to GPS, WiFi, Bluetooth, and more.

Some comments about patents. They are meant to encourage more innovation by ensuring that creators/inventors get paid for their efforts. This is true for a set time period, and when it’s over, other people get access and can innovate further. It’s not intended to be a lifelong (or inheritable) source of income. The issue in Lamarr’s case is that the navy developed the technology during the patent’s term without telling either her or her partner, so, of course, they didn’t need to compensate them despite the original agreement. They really should have paid her and Antheil.

The current patent situation, particularly in the US, is vastly different from the original vision. These days patents are often used as weapons designed to halt innovation. One item that should be noted is that the Canadian federal budget indirectly addressed their misuse (from my March 16, 2018 posting),

Surprisingly, no one else seems to have mentioned a new (?) intellectual property strategy introduced in the document (from Chapter 2: Progress; scroll down about 80% of the way, Note: The formatting has been changed),

Budget 2018 proposes measures in support of a new Intellectual Property Strategy to help Canadian entrepreneurs better understand and protect intellectual property, and get better access to shared intellectual property.

What Is a Patent Collective?
A Patent Collective is a way for firms to share, generate, and license or purchase intellectual property. The collective approach is intended to help Canadian firms ensure a global “freedom to operate”, mitigate the risk of infringing a patent, and aid in the defence of a patent infringement suit.

Budget 2018 proposes to invest $85.3 million over five years, starting in 2018–19, with $10 million per year ongoing, in support of the strategy. The Minister of Innovation, Science and Economic Development will bring forward the full details of the strategy in the coming months, including the following initiatives to increase the intellectual property literacy of Canadian entrepreneurs, and to reduce costs and create incentives for Canadian businesses to leverage their intellectual property:

  • To better enable firms to access and share intellectual property, the Government proposes to provide $30 million in 2019–20 to pilot a Patent Collective. This collective will work with Canada’s entrepreneurs to pool patents, so that small and medium-sized firms have better access to the critical intellectual property they need to grow their businesses.
  • To support the development of intellectual property expertise and legal advice for Canada’s innovation community, the Government proposes to provide $21.5 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada. This funding will improve access for Canadian entrepreneurs to intellectual property legal clinics at universities. It will also enable the creation of a team in the federal government to work with Canadian entrepreneurs to help them develop tailored strategies for using their intellectual property and expanding into international markets.
  • To support strategic intellectual property tools that enable economic growth, Budget 2018 also proposes to provide $33.8 million over five years, starting in 2018–19, to Innovation, Science and Economic Development Canada, including $4.5 million for the creation of an intellectual property marketplace. This marketplace will be a one-stop, online listing of public sector-owned intellectual property available for licensing or sale to reduce transaction costs for businesses and researchers, and to improve Canadian entrepreneurs’ access to public sector-owned intellectual property.

The Government will also consider further measures, including through legislation, in support of the new intellectual property strategy.

Helping All Canadians Harness Intellectual Property
Intellectual property is one of our most valuable resources, and every Canadian business owner should understand how to protect and use it.

To better understand what groups of Canadians are benefiting the most from intellectual property, Budget 2018 proposes to provide Statistics Canada with $2 million over three years to conduct an intellectual property awareness and use survey. This survey will help identify how Canadians understand and use intellectual property, including groups that have traditionally been less likely to use intellectual property, such as women and Indigenous entrepreneurs. The results of the survey should help the Government better meet the needs of these groups through education and awareness initiatives.

The Canadian Intellectual Property Office will also increase the number of education and awareness initiatives that are delivered in partnership with business, intermediaries and academia to ensure Canadians better understand, integrate and take advantage of intellectual property when building their business strategies. This will include targeted initiatives to support underrepresented groups.

Finally, Budget 2018 also proposes to invest $1 million over five years to enable representatives of Canada’s Indigenous Peoples to participate in discussions at the World Intellectual Property Organization related to traditional knowledge and traditional cultural expressions, an important form of intellectual property.

It’s not wholly clear what they mean by ‘intellectual property’. The focus seems to be on patents, as they are the only intellectual property (as opposed to copyright and trademarks) singled out in the budget. As for how the ‘patent collective’ is going to meet all its objectives, this budget supplies no clarity on the matter. On the plus side, I’m glad to see that Indigenous peoples’ knowledge is being acknowledged as “an important form of intellectual property” and I hope the discussions at the World Intellectual Property Organization are fruitful.

As for the patent situation in Canada (from the report released April 10, 2018),

Over the past decade, the Canadian patent flow in all technical sectors has consistently decreased. Patent flow provides a partial picture of how patents in Canada are exploited. A negative flow represents a deficit of patented inventions owned by Canadian assignees versus the number of patented inventions created by Canadian inventors. The patent flow for all Canadian patents decreased from about −0.04 in 2003 to −0.26 in 2014 (Figure 4.7). This means that there is an overall deficit of 26% of patent ownership in Canada. In other words, fewer patents were owned by Canadian institutions than were invented in Canada.

This is a significant change from 2003 when the deficit was only 4%. The drop is consistent across all technical sectors in the past 10 years, with Mechanical Engineering falling the least, and Electrical Engineering the most (Figure 4.7). At the technical field level, the patent flow dropped significantly in Digital Communication and Telecommunications. For example, the Digital Communication patent flow fell from 0.6 in 2003 to −0.2 in 2014. This fall could be partially linked to Nortel’s US$4.5 billion patent sale [emphasis mine] to the Rockstar consortium (which included Apple, BlackBerry, Ericsson, Microsoft, and Sony) (Brickley, 2011). Food Chemistry and Microstructural [?] and Nanotechnology both also showed a significant drop in patent flow. [p. 83 Print; p. 121 PDF]
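To make the report’s ‘patent flow’ arithmetic concrete, here’s a minimal sketch in Python. The report doesn’t spell out the exact formula, so the definition below, and the illustrative counts, are my assumptions based on its description of flow as an ownership deficit:

```python
# Hypothetical sketch: one plausible reading of the report's "patent flow".
# flow = (patents owned by Canadian assignees - patents invented in Canada)
#        / patents invented in Canada
def patent_flow(owned_by_canadian_assignees: int, invented_in_canada: int) -> float:
    """Negative values indicate an ownership deficit."""
    return (owned_by_canadian_assignees - invented_in_canada) / invented_in_canada

# Illustrative numbers only (not from the report):
print(patent_flow(96, 100))   # -0.04, matching the 2003 figure: a 4% deficit
print(patent_flow(74, 100))   # -0.26, matching the 2014 figure: a 26% deficit
```

On this reading, a flow of −0.26 simply means Canadian institutions ended up owning 26% fewer patents than Canadian inventors created.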

Despite a fall in the number of patents for ‘Digital Communication’, we’re still doing well according to statistics elsewhere in this report. Is it possible that patents aren’t that big a deal? Of course, it’s also possible that we are enjoying the benefits of past work and will miss out on future work. (Note: A video of the April 10, 2018 report presentation by Max Blouw features him saying something like that.)

One last note, Nortel died many years ago. Disconcertingly, this report, despite more than one reference to Nortel, never mentions the company’s demise.

Boxed text

While the expert panel wasn’t tasked to answer certain types of questions, as I’ve noted earlier, they managed to sneak in a few items. One of the strategies they used was putting special inserts into text boxes, including this (from the report released April 10, 2018),

Box 4.2
The FinTech Revolution

Financial services is a key industry in Canada. In 2015, the industry accounted for 4.4% of Canadian jobs and about 7% of Canadian GDP (Burt, 2016). Toronto is the second largest financial services hub in North America and one of the most vibrant research hubs in FinTech. Since 2010, more than 100 start-up companies have been founded in Canada, attracting more than $1 billion in investment (Moffatt, 2016). In 2016 alone, venture-backed investment in Canadian financial technology companies grew by 35% to $137.7 million (Ho, 2017). The Toronto Financial Services Alliance estimates that there are approximately 40,000 ICT specialists working in financial services in Toronto alone.

AI, blockchain, [emphasis mine] and other results of ICT research provide the basis for several transformative FinTech innovations including, for example, decentralized transaction ledgers, cryptocurrencies (e.g., bitcoin), and AI-based risk assessment and fraud detection. These innovations offer opportunities to develop new markets for established financial services firms, but also provide entry points for technology firms to develop competing service offerings, increasing competition in the financial services industry. In response, many financial services companies are increasing their investments in FinTech companies (Breznitz et al., 2015). By their own account, the big five banks invest more than $1 billion annually in R&D of advanced software solutions, including AI-based innovations (J. Thompson, personal communication, 2016). The banks are also increasingly investing in university research and collaboration with start-up companies. For instance, together with several large insurance and financial management firms, all big five banks have invested in the Vector Institute for Artificial Intelligence (Kolm, 2017).

I’m glad to see the mention of blockchain. AI (artificial intelligence) is an area where we have innovated (from the report released April 10, 2018),

AI has attracted researchers and funding since the 1960s; however, there were periods of stagnation in the 1970s and 1980s, sometimes referred to as the “AI winter.” During this period, the Canadian Institute for Advanced Research (CIFAR), under the direction of Fraser Mustard, started supporting AI research with a decade-long program called Artificial Intelligence, Robotics and Society, [emphasis mine] which was active from 1983 to 1994. In 2004, a new program called Neural Computation and Adaptive Perception was initiated and renewed twice in 2008 and 2014 under the title, Learning in Machines and Brains. Through these programs, the government provided long-term, predictable support for high-risk research that propelled Canadian researchers to the forefront of global AI development. In the 1990s and early 2000s, Canadian research output and impact on AI were second only to that of the United States (CIFAR, 2016). NSERC has also been an early supporter of AI. According to its searchable grant database, NSERC has given funding to research projects on AI since at least 1991–1992 (the earliest searchable year) (NSERC, 2017a).

The University of Toronto, the University of Alberta, and the Université de Montréal have emerged as international centres for research in neural networks and deep learning, with leading experts such as Geoffrey Hinton and Yoshua Bengio. Recently, these locations have expanded into vibrant hubs for research in AI applications with a diverse mix of specialized research institutes, accelerators, and start-up companies, and growing investment by major international players in AI development, such as Microsoft, Google, and Facebook. Many highly influential AI researchers today are either from Canada or have at some point in their careers worked at a Canadian institution or with Canadian scholars.

As international opportunities in AI research and the ICT industry have grown, many of Canada’s AI pioneers have been drawn to research institutions and companies outside of Canada. According to the OECD, Canada’s share of patents in AI declined from 2.4% in 2000 to 2005 to 2% in 2010 to 2015. Although Canada is the sixth largest producer of top-cited scientific publications related to machine learning, firms headquartered in Canada accounted for only 0.9% of all AI-related inventions from 2012 to 2014 (OECD, 2017c). Canadian AI researchers, however, remain involved in the core nodes of an expanding international network of AI researchers, most of whom continue to maintain ties with their home institutions. Compared with their international peers, Canadian AI researchers are engaged in international collaborations far more often than would be expected by Canada’s level of research output, with Canada ranking fifth in collaboration. [p. 97-98 Print; p. 135-136 PDF]

The only mention of robotics seems to be here in this section and it’s only in passing. This is a bit surprising given its global importance. I wonder if robotics has been somehow hidden inside the term artificial intelligence, although sometimes it’s vice versa, with ‘robot’ being used to describe artificial intelligence. I’m noticing this trend of assuming the terms are synonymous or interchangeable not just in Canadian publications but elsewhere too. ’nuff said.

Getting back to the matter at hand, the report does note that patenting (technometric data) is problematic (from the report released April 10, 2018),

The limitations of technometric data stem largely from their restricted applicability across areas of R&D. Patenting, as a strategy for IP management, is similarly limited in not being equally relevant across industries. Trends in patenting can also reflect commercial pressures unrelated to R&D activities, such as defensive or strategic patenting practices. Finally, taxonomies for assessing patents are not aligned with bibliometric taxonomies, though links can be drawn to research publications through the analysis of patent citations. [p. 105 Print; p. 143 PDF]

It’s interesting to me that they make reference to many of the same issues I mention, but they seem to forget that information and don’t use it in their conclusions.

There is one other piece of boxed text I want to highlight (from the report released April 10, 2018),

Box 6.3
Open Science: An Emerging Approach to Create New Linkages

Open Science is an umbrella term to describe collaborative and open approaches to undertaking science, which can be powerful catalysts of innovation. This includes the development of open collaborative networks among research performers, such as the private sector, and the wider distribution of research that usually results when restrictions on use are removed. Such an approach triggers faster translation of ideas among research partners and moves the boundaries of pre-competitive research to later, applied stages of research. With research results freely accessible, companies can focus on developing new products and processes that can be commercialized.

Two Canadian organizations exemplify the development of such models. In June 2017, Genome Canada, the Ontario government, and pharmaceutical companies invested $33 million in the Structural Genomics Consortium (SGC) (Genome Canada, 2017). Formed in 2004, the SGC is at the forefront of the Canadian open science movement and has contributed to many key research advancements towards new treatments (SGC, 2018). McGill University’s Montréal Neurological Institute and Hospital has also embraced the principles of open science. Since 2016, it has been sharing its research results with the scientific community without restriction, with the objective of expanding “the impact of brain research and accelerat[ing] the discovery of ground-breaking therapies to treat patients suffering from a wide range of devastating neurological diseases” (neuro, n.d.).

This is exciting stuff and I’m happy the panel featured it. (I wrote about the Montréal Neurological Institute initiative in a Jan. 22, 2016 posting.)

More than once, the report notes the difficulties with using bibliometric and technometric data as measures of scientific achievement and progress. Open science (along with its cousins, open data and open access) is contributing to those difficulties, as James Somers notes in his April 5, 2018 article ‘The Scientific Paper is Obsolete’ for The Atlantic (Note: Links have been removed),

The scientific paper—the actual form of it—was one of the enabling inventions of modernity. Before it was developed in the 1600s, results were communicated privately in letters, ephemerally in lectures, or all at once in books. There was no public forum for incremental advances. By making room for reports of single experiments or minor technical advances, journals made the chaos of science accretive. Scientists from that point forward became like the social insects: They made their progress steadily, as a buzzing mass.

The earliest papers were in some ways more readable than papers are today. They were less specialized, more direct, shorter, and far less formal. Calculus had only just been invented. Entire data sets could fit in a table on a single page. What little “computation” contributed to the results was done by hand and could be verified in the same way.

The more sophisticated science becomes, the harder it is to communicate results. Papers today are longer than ever and full of jargon and symbols. They depend on chains of computer programs that generate data, and clean up data, and plot data, and run statistical models on data. These programs tend to be both so sloppily written and so central to the results that it’s [sic] contributed to a replication crisis, or put another way, a failure of the paper to perform its most basic task: to report what you’ve actually discovered, clearly enough that someone else can discover it for themselves.

Perhaps the paper itself is to blame. Scientific methods evolve now at the speed of software; the skill most in demand among physicists, biologists, chemists, geologists, even anthropologists and research psychologists, is facility with programming languages and “data science” packages. And yet the basic means of communicating scientific results hasn’t changed for 400 years. Papers may be posted online, but they’re still text and pictures on a page.

What would you get if you designed the scientific paper from scratch today? A little while ago I spoke to Bret Victor, a researcher who worked at Apple on early user-interface prototypes for the iPad and now runs his own lab in Oakland, California, that studies the future of computing. Victor has long been convinced that scientists haven’t yet taken full advantage of the computer. “It’s not that different than looking at the printing press, and the evolution of the book,” he said. After Gutenberg, the printing press was mostly used to mimic the calligraphy in bibles. It took nearly 100 years of technical and conceptual improvements to invent the modern book. “There was this entire period where they had the new technology of printing, but they were just using it to emulate the old media.”

Victor gestured at what might be possible when he redesigned a journal article by Duncan Watts and Steven Strogatz, “Collective dynamics of ‘small-world’ networks.” He chose it both because it’s one of the most highly cited papers in all of science and because it’s a model of clear exposition. (Strogatz is best known for writing the beloved “Elements of Math” column for The New York Times.)

The Watts-Strogatz paper described its key findings the way most papers do, with text, pictures, and mathematical symbols. And like most papers, these findings were still hard to swallow, despite the lucid prose. The hardest parts were the ones that described procedures or algorithms, because these required the reader to “play computer” in their head, as Victor put it, that is, to strain to maintain a fragile mental picture of what was happening with each step of the algorithm.

Victor’s redesign interleaved the explanatory text with little interactive diagrams that illustrated each step. In his version, you could see the algorithm at work on an example. You could even control it yourself….

For anyone interested in the evolution of how science is conducted and communicated, Somers’ article is a fascinating and in depth look at future possibilities.

Subregional R&D

I didn’t find this quite as compelling as the last time, which may be because there’s less information; I believe the 2012 report was the first to examine the Canadian R&D scene with a subregional (in their case, provinces) lens. On a high note, this report also covers cities (!) and regions, as well as provinces.

Here’s the conclusion (from the report released April 10, 2018),

Ontario leads Canada in R&D investment and performance. The province accounts for almost half of R&D investment and personnel, research publications and collaborations, and patents. R&D activity in Ontario produces high-quality publications in each of Canada’s five R&D strengths, reflecting both the quantity and quality of universities in the province. Quebec lags Ontario in total investment, publications, and patents, but performs as well (citations) or better (R&D intensity) by some measures. Much like Ontario, Quebec researchers produce impactful publications across most of Canada’s five R&D strengths. Although it invests an amount similar to that of Alberta, British Columbia does so at a significantly higher intensity. British Columbia also produces more highly cited publications and patents, and is involved in more international research collaborations. R&D in British Columbia and Alberta clusters around Vancouver and Calgary in areas such as physics and ICT and in clinical medicine and energy, respectively. [emphasis mine] Smaller but vibrant R&D communities exist in the Prairies and Atlantic Canada [also referred to as the Maritime provinces or Maritimes] (and, to a lesser extent, in the Territories) in natural resource industries.

Globally, as urban populations expand exponentially, cities are likely to drive innovation and wealth creation at an increasing rate in the future. In Canada, R&D activity clusters around five large cities: Toronto, Montréal, Vancouver, Ottawa, and Calgary. These five cities create patents and high-tech companies at nearly twice the rate of other Canadian cities. They also account for half of clusters in the services sector, and many in advanced manufacturing.

Many clusters relate to natural resources and long-standing areas of economic and research strength. Natural resource clusters have emerged around the location of resources, such as forestry in British Columbia, oil and gas in Alberta, agriculture in Ontario, mining in Quebec, and maritime resources in Atlantic Canada. The automotive, plastics, and steel industries have the most individual clusters as a result of their economic success in Windsor, Hamilton, and Oshawa. Advanced manufacturing industries tend to be more concentrated, often located near specialized research universities. Strong connections between academia and industry are often associated with these clusters. R&D activity is distributed across the country, varying both between and within regions. It is critical to avoid drawing the wrong conclusion from this fact. This distribution does not imply the existence of a problem that needs to be remedied. Rather, it signals the benefits of diverse innovation systems, with differentiation driven by the needs of and resources available in each province. [pp.  132-133 Print; pp. 170-171 PDF]

Intriguingly, there’s no mention that in British Columbia (BC), there are leading areas of research: Visual & Performing Arts, Psychology & Cognitive Sciences, and Clinical Medicine (according to the table on p. 117 Print, p. 153 PDF).

As I said and hinted earlier, we’ve got brains; they’re just not the kind of brains that command respect.

Final comments

My hat’s off to the expert panel and staff of the Council of Canadian Academies. Combining two previous reports into one could not have been easy. As well, kudos for their attempts to broaden the discussion by mentioning initiatives such as open science and for emphasizing the problems with bibliometrics, technometrics, and other measures. I have covered only parts of this assessment (Competing in a Global Innovation Economy: The Current State of R&D in Canada); there’s a lot more to it, including a substantive list of reference materials (bibliography).

While I have argued that perhaps the situation isn’t quite as bad as the headlines and statistics may suggest, there are some concerning trends for Canadians. But we have to acknowledge that many countries have stepped up their research game, and that’s good for all of us. You don’t get better at anything unless you work and play with others who are better than you are. For example, both India and Italy surpassed us in numbers of published research papers; we slipped from 7th place to 9th. Thank you, Italy and India. (And, Happy ‘Italian Research in the World Day’ on April 15, 2018, its inaugural year. In Italian: Piano Straordinario “Vivere all’Italiana” – Giornata della ricerca Italiana nel mondo.)

Unfortunately, the reading is harder going than previous R&D assessments in the CCA catalogue. And in the end, I can’t help thinking we’re just a little bit like Hedy Lamarr. Not really appreciated in all of our complexities although the expert panel and staff did try from time to time. Perhaps the government needs to find better ways of asking the questions.

***ETA April 12, 2018 at 1500 PDT: Talking about missing the obvious! I’ve been ranting on about how research strength in visual and performing arts and in philosophy and theology, etc. is perfectly fine and could lead to ‘traditional’ science breakthroughs, without underlining the point: Antheil was a musician and Lamarr was an actress, and their signature work set the foundation for the work by electrical engineers (or people with that specialty) that led to WiFi, etc.***

There is, by the way, a Hedy-Canada connection. In 1998, she sued Canadian software company Corel, for its unauthorized use of her image on their Corel Draw 8 product packaging. She won.

More stuff

For those who’d like to see and hear the April 10, 2018 launch for “Competing in a Global Innovation Economy: The Current State of R&D in Canada” or the Third Assessment as I think of it, go here.

The report can be found here.

For anyone curious about ‘Bombshell: The Hedy Lamarr Story’ to be broadcast on May 18, 2018 as part of PBS’s American Masters series, there’s this trailer,

For the curious, I did find out more about Hedy Lamarr and Corel Draw. John Lettice’s December 2, 1998 article for The Register describes the suit and her subsequent victory in less than admiring terms,

Our picture doesn’t show glamorous actress Hedy Lamarr, who yesterday [Dec. 1, 1998] came to a settlement with Corel over the use of her image on Corel’s packaging. But we suppose that following the settlement we could have used a picture of Corel’s packaging. Lamarr sued Corel earlier this year over its use of a CorelDraw image of her. The picture had been produced by John Corkery, who was 1996 Best of Show winner of the Corel World Design Contest. Corel now seems to have come to an undisclosed settlement with her, which includes a five-year exclusive (oops — maybe we can’t use the pack-shot then) licence to use “the lifelike vector illustration of Hedy Lamarr on Corel’s graphic software packaging”. Lamarr, bless ‘er, says she’s looking forward to the continued success of Corel Corporation,  …

There’s this excerpt from a Sept. 21, 2015 posting (a pictorial essay of Lamarr’s life) by Shahebaz Khan on The Blaze Blog,

6. CorelDRAW:
For several years beginning in 1997, the boxes of Corel DRAW’s software suites were graced by a large Corel-drawn image of Lamarr. The picture won Corel DRAW’s yearly software suite cover design contest in 1996. Lamarr sued Corel for using the image without her permission. Corel countered that she did not own rights to the image. The parties reached an undisclosed settlement in 1998.

There’s also a Nov. 23, 1998 Corel Draw 8 product review by Mike Gorman on mymac.com, which includes a screenshot of the packaging that precipitated the lawsuit. Once they settled, it seems Corel used her image at least one more time.

Canada’s ‘Smart Cities’ will need new technology (5G wireless) and, maybe, graphene

I recently published [March 20, 2018] a piece on ‘smart cities’ covering both an art/science event in Toronto and a Canadian government initiative, without mentioning the necessity of new technology to support all of the grand plans. On that note, it seems the Canadian federal government and two provincial (Québec and Ontario) governments are prepared to invest in one of the necessary ‘new’ technologies, 5G wireless. The Canadian Broadcasting Corporation’s (CBC) Shawn Benjamin reports on Canada’s 5G plans in suitably breathless (even in text only) tones of excitement in a March 19, 2018 article,

The federal, Ontario and Quebec governments say they will spend $200 million to help fund research into 5G wireless technology, the next-generation networks with download speeds 100 times faster than current ones can handle.

The so-called “5G corridor,” known as ENCQOR, will see tech companies such as Ericsson, Ciena Canada, Thales Canada, IBM and CGI kick in another $200 million to develop facilities to get the project up and running.

The idea is to set up a network of linked research facilities and laboratories that these companies — and as many as 1,000 more across Canada — will be able to use to test products and services that run on 5G networks.

Benjamin’s description of 5G is focused on what it will make possible in the future,

If you think things are moving too fast, buckle up, because a new 5G cellular network is just around the corner and it promises to transform our lives by connecting nearly everything to a new, much faster, reliable wireless network.

The first networks won’t be operational for at least a few years, but technology and telecom companies around the world are already planning to spend billions to make sure they aren’t left behind, says Lawrence Surtees, a communications analyst with the research firm IDC.

The new 5G is no tentative baby step toward the future. Rather, as Surtees puts it, “the move from 4G to 5G is a quantum leap.”

In a downtown Toronto soundstage, Alan Smithson recently demonstrated a few virtual reality and augmented reality projects that his company MetaVRse is working on.

The potential for VR and AR technology is endless, he said, in large part for its potential to help hurdle some of the walls we are already seeing with current networks.

Virtual Reality technology on the market today is continually increasing things like frame rates and screen resolutions in a constant quest to make their devices even more lifelike.

… They [current 4G networks] can’t handle the load. But 5G can do so easily, Smithson said, so much so that the current era of bulky augmented reality headsets could be replaced by a pair of normal looking glasses.

In a 5G world, those internet-connected glasses will automatically recognize everyone you meet, and possibly be able to overlay their name in your field of vision, along with a link to their online profile. …

Benjamin also mentions ‘smart cities’,

In a University of Toronto laboratory, Professor Alberto Leon-Garcia researches connected vehicles and smart power grids. “My passion right now is enabling smart cities — making smart cities a reality — and that means having much more immediate and detailed sense of the environment,” he said.

Faster 5G networks will assist his projects in many ways, by giving planners more, instant data on things like traffic patterns, energy consumption, various carbon footprints and much more.

Leon-Garcia points to a brightly lit map of Toronto [image embedded in Benjamin’s article] in his office, and explains that every dot of light represents a sensor transmitting real time data.

Currently, the network is hooked up to things like city buses, traffic cameras and the city-owned fleet of shared bicycles. He currently has thousands of data points feeding him info on his map, but in a 5G world, the network will support about a million sensors per square kilometre.

Very exciting, but where is all this data going? What computers will be processing the information? Where are these sensors located? Benjamin does not venture into those waters, nor does The Economist in a February 13, 2018 article about 5G and the Olympic Games in Pyeongchang, South Korea, but the magazine does note another barrier to 5G implementation,
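
For a rough sense of scale, here’s a hypothetical back-of-envelope sketch. Only the million-sensors-per-square-kilometre figure comes from the article; the per-sensor payload size and reporting rate are invented numbers:

```python
# Back-of-envelope: how much data might a dense 5G sensor grid produce?
sensors_per_km2 = 1_000_000    # figure quoted by Leon-Garcia
bytes_per_reading = 100        # assumption: a compact sensor message
readings_per_second = 1        # assumption: one update per second

bytes_per_second = sensors_per_km2 * bytes_per_reading * readings_per_second
gigabits_per_second = bytes_per_second * 8 / 1e9
terabytes_per_day = bytes_per_second * 86_400 / 1e12

print(f"{gigabits_per_second:.1f} Gbps per square kilometre")  # 0.8 Gbps
print(f"{terabytes_per_day:.2f} TB per day")                   # 8.64 TB
```

Even with these modest assumptions, a single square kilometre would generate terabytes of data daily, which is why the question of where the processing happens matters.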

“FASTER, higher, stronger,” goes the Olympic motto. So it is only appropriate that the next generation of wireless technology, “5G” for short, should get its first showcase at the Winter Olympics under way in Pyeongchang, South Korea. Once fully developed, it is supposed to offer download speeds of at least 20 gigabits per second (4G manages about half that at best) and response times (“latency”) of below 1 millisecond. So the new networks will be able to transfer a high-resolution movie in two seconds and respond to requests in less than a hundredth of the time it takes to blink an eye. But 5G is not just about faster and swifter wireless connections.

The technology is meant to enable all sorts of new services. One such would offer virtual- or augmented-reality experiences. At the Olympics, for example, many contestants are being followed by 360-degree video cameras. At special venues sports fans can don virtual-reality goggles to put themselves right into the action. But 5G is also supposed to become the connective tissue for the internet of things, to link anything from smartphones to wireless sensors and industrial robots to self-driving cars. This will be made possible by a technique called “network slicing”, which allows operators quickly to create bespoke networks that give each set of devices exactly the connectivity they need.

Despite its versatility, it is not clear how quickly 5G will take off. The biggest brake will be economic. [emphasis mine] When the GSMA, an industry group, last year asked 750 telecoms bosses about the most salient impediment to delivering 5G, more than half cited the lack of a clear business case. People may want more bandwidth, but they are not willing to pay for it—an attitude even the lure of the fanciest virtual-reality applications may not change. …
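
As an aside, The Economist’s headline figures are easy to sanity-check with a little arithmetic; the movie size below is implied by the quoted speed, not stated in the article:

```python
# Sanity check on The Economist's numbers: at 20 gigabits per second,
# how large a file moves in two seconds?
speed_gbps = 20     # quoted minimum 5G download speed
seconds = 2         # quoted transfer time for a "high-resolution movie"

gigabytes = speed_gbps * seconds / 8   # 8 bits per byte
print(gigabytes)  # 5.0 -- plausibly a high-resolution movie
```

Five gigabytes is indeed in the range of a high-definition feature film, so the magazine’s claim holds up.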

That may not be the only brake. Dexter Johnson, in a March 19, 2018 posting on his Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website), covers some of the others (Note: Links have been removed),

Graphene has been heralded as a “wonder material” for well over a decade now, and 5G has been marketed as the next big thing for at least the past five years. Analysts have suggested that 5G could be the golden ticket to virtual reality and artificial intelligence, and promised that graphene could improve technologies within electronics and optoelectronics.

But proponents of both graphene and 5G have also been accused of stirring up hype. There now seems to be a rising sense within industry circles that these glowing technological prospects will not come anytime soon.

At Mobile World Congress (MWC) in Barcelona last month [February 2018], some misgivings for these long promised technologies may have been put to rest, though, thanks in large part to each other.

In a meeting at MWC with Jari Kinaret, a professor at Chalmers University in Sweden and director of the Graphene Flagship, I took a guided tour around the Pavilion to see some of the technologies poised to have an impact on the development of 5G.

Being invited back to the MWC for three years is a pretty clear indication of how important graphene is to those who are trying to raise the fortunes of 5G. But just how important became more obvious to me in an interview with Frank Koppens, the leader of the quantum nano-optoelectronic group at Institute of Photonic Sciences (ICFO) just outside of Barcelona, last year.

He said: “5G cannot just scale. Some new technology is needed. And that’s why we have several companies in the Graphene Flagship that are putting a lot of pressure on us to address this issue.”

In a collaboration led by CNIT—a consortium of Italian universities and national laboratories focused on communication technologies—researchers from AMO GmbH, Ericsson, Nokia Bell Labs, and Imec have developed graphene-based photodetectors and modulators capable of receiving and transmitting optical data faster than ever before.

The aim of all this speed for transmitting data is to support the ultrafast data streams with extreme bandwidth that will be part of 5G. In fact, at another section during MWC, Ericsson was presenting the switching of a 100 Gigabits per second (Gbps) channel based on the technology.

“The fact that Ericsson is demonstrating another version of this technology demonstrates that from Ericsson’s point of view, this is no longer just research” said Kinaret.

It’s no mystery why the big mobile companies are jumping on this technology. Not only does it provide high-speed data transmission, but it also does it 10 times more efficiently than silicon or doped silicon devices, and will eventually do it more cheaply than those devices, according to Vito Sorianello, senior researcher at CNIT.

Interestingly, Ericsson is one of the tech companies mentioned with regard to Canada’s 5G project, ENCQOR, and Sweden’s Chalmers University, as Dexter Johnson notes, is the lead institution for the Graphene Flagship. One other fact to note: Canada’s resources include graphite mines with ‘premium’ flakes for producing graphene. Canada’s graphite mines are located (as far as I know) in only two Canadian provinces, Ontario and Québec, which also happen to be pitching money into ENCQOR. My March 21, 2018 posting describes the latest entry into the Canadian graphite mining stakes.

As for the questions I posed about processing power, etc. It seems the South Koreans have found answers of some kind but it’s hard to evaluate as I haven’t found any additional information about 5G and its implementation in South Korea. If anyone has answers, please feel free to leave them in the ‘comments’. Thank you.

Graphite ‘gold’ rush?

Someone in Germany (I think) is very excited about graphite, more specifically, about graphite flakes located in the province of Québec, Canada. That said, the person who wrote this news release might have wanted to run a search for ‘graphite’ and ‘gold rush’; the last graphite gold rush seems to have taken place in 2013.

Here’s the March 1, 2018 news release on PR Newswire (Cision), Note: Some links have been removed,

PALM BEACH, Florida, March 1, 2018 /PRNewswire/ —

MarketNewsUpdates.com News Commentary

Much like the gold rush in North America in the 1800s, people are going out in droves searching for a different kind of precious metal, graphite. The thing your third grade pencils were made of is now one of the hottest commodities on the market. This graphite is not being mined by your run-of-the-mill old-timey soot covered prospectors anymore. Big mining companies are all looking for this important resource integral to the production of lithium ion batteries due to the rise in popularity of electric cars. These players include Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), Teck Resources Limited (NYSE: TECK), Nemaska Lithium (TSX: NMX), Lithium Americas Corp. (TSX: LAC), and Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF).

These companies, looking to manufacture their graphite-based products, have seen steady positive growth over the past year. Their development of cutting-edge new products seems to be paying off. But in order to continue innovating, these companies need the graphite to do it. One junior miner looking to capitalize on the growing demand for this commodity is Graphite Energy Corp.

Graphite Energy is a mining company focused on developing graphite resources. Graphite Energy’s state-of-the-art mining technology is friendly to the environment and has indicated graphite carbon (Cg) in the range of 2.20% to 22.30%, with an average of 10.50% Cg, from their Lac Aux Bouleaux Graphite Property in Southern Quebec [Canada].

Not Just Any Graphite Will Do

Graphite is one of the most in-demand technology metals required for a green and sustainable world. Demand is only set to increase as the need for lithium-ion batteries grows, fueled by the popularity of electric vehicles. However, not all graphite is created equal. The price of natural graphite has more than doubled since 2013 as companies look to maintain environmental standards that synthetic graphite cannot meet, due to its polluting manufacturing process. Synthetic graphite is also very expensive to produce, deriving from petroleum and costing up to ten times as much as natural graphite. Therefore, manufacturers are interested in increasing the proportion of natural graphite in their products in order to lower their costs.

High-grade large flake graphite is the solution to the environmental issues these companies are facing. But there is only so much supply to go around. Recent news by Graphite Energy Corp. on February 26th [2018] showed promising exploratory results. The announcement of the commencement of drilling is a positive step forward to meeting this increased demand.

Everything from batteries to solar panels needs to be made with this natural high-grade flake graphite, because what is the point of powering your home with the sun or charging your car if the products themselves do more harm than good to the environment when produced? However, supply consistency remains an issue since raw material impurities vary from mine to mine. Certain types of battery technology already require graphite to be almost 100% pure. It is very possible that the purity requirements will increase in the future.

Natural graphite is also the basis of graphene, the uses of which seem limited only by scientists’ imaginations, given the host of new applications announced daily. In a recent study by ResearchSEA, a team from the Ocean University of China and Yunnan Normal University developed a highly efficient dye-sensitized solar cell using a graphene layer. This thin layer of graphene will allow solar panels to generate electricity when it rains.

Graphite Energy Is Keeping It Green

Whether it’s the graphite for the solar panels that will power the homes of tomorrow, or the lithium-ion batteries that will fuel the latest cars, these advancements need to be made in an environmentally conscious way. Mining companies like Graphite Energy Corp. specialize in the production of environmentally friendly graphite. The company will be producing its supply of natural graphite with the lowest environmental footprint possible.

From Saltwater To Clean Water Using Graphite

The world’s freshwater supply is at risk of running out. In order to mitigate this global disaster, worldwide spending on desalination technology was an estimated $16.6 billion in 2016. Due to the recent intense droughts in California, the state has accelerated the construction of desalination plants. However, the operating costs and the environmental impact of the process’s energy requirements have been hindering any real progress in the space, until now.

Jeffrey Grossman, a professor at MIT’s [Massachusetts Institute of Technology, United States] Department of Materials Science and Engineering (DMSE), has been looking into whether graphite/graphene might reduce the cost of desalination.

“A billion people around the world lack regular access to clean water, and that’s expected to more than double in the next 25 years,” Grossman says. “Desalinated water costs five to 10 times more than regular municipal water, yet we’re not investing nearly enough money into research. If we don’t have clean energy we’re in serious trouble, but if we don’t have water we die.”

Grossman’s lab has demonstrated strong results showing that new filters made from graphene could greatly improve the energy efficiency of desalination plants while potentially reducing other costs as well.

Graphite/Graphene producers like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are moving quickly to provide the materials necessary to develop this new generation of desalination plants.

Potential Comparables

Cruz Cobalt Corp. (TSX-V: CUZ) (OTC: BKTPF) Cruz Cobalt Corp. is a cobalt mining company involved in the identification, acquisition and exploration of mineral properties. The company’s geographical segments include the United States and Canada. They are focused on acquiring and developing high-grade cobalt projects in politically stable, environmentally responsible and ethical mining jurisdictions, essential for the rapidly growing rechargeable battery and renewable energy sectors.

Nemaska Lithium (TSE: NMX.TO)

Nemaska Lithium is a lithium mining company. The company is a supplier of lithium hydroxide and lithium carbonate to the emerging lithium battery market that is largely driven by electric vehicles. Nemaska’s mining operations are located in the mining-friendly jurisdiction of Quebec, Canada. Nemaska Lithium has received a notice of allowance of a main patent application on its proprietary process to produce lithium hydroxide and lithium carbonate.

Lithium Americas Corp. (TSX: LAC.TO)

Lithium Americas is developing one of North America’s largest lithium deposits in northern Nevada. It operates two lithium projects, namely the Cauchari-Olaroz project, located in Argentina, and the Lithium Nevada project, located in Nevada. The company manufactures specialty organoclay products, derived from clays, for sale to the oil and gas and other sectors.

Teck Resources Limited (NYSE: TECK)

Teck Resources Limited is a Canadian metals and mining company. Teck’s principal products include coal, copper and zinc, with secondary products including lead, silver, gold, molybdenum, germanium, indium and cadmium. Teck’s diverse resources focus on providing products that are essential to building a better quality of life for people around the globe.

Graphite Mining Today For A Better Tomorrow

Graphite mining will forever be intertwined with the latest advancements in science and technology. Graphite deserves attention for its various use cases in the automotive, energy, aerospace and robotics industries. In order for these and other industries to become sustainable and environmentally friendly, a reliance on graphite is necessary. Therefore, this rapidly growing sector has the potential to fuel investor interest in the mining space throughout 2018. The near-limitless uses of graphite have the potential to impact every facet of our lives. Companies like Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE) are at the forefront of this technological revolution.

For more information on Graphite Energy Corp. (OTC: GRXXF) (CSE: GRE), please visit streetsignals.com for a free research report.

Streetsignals.com (SS) is the source of the Article and content set forth above. References to any issuer other than the profiled issuer are intended solely to identify industry participants and do not constitute an endorsement of any issuer and do not constitute a comparison to the profiled issuer. FN Media Group (FNM) is a third-party publisher and news dissemination service provider, which disseminates electronic information through multiple online media channels. FNM is NOT affiliated with SS or any company mentioned herein. The commentary, views and opinions expressed in this release by SS are solely those of SS and are not shared by and do not reflect in any manner the views or opinions of FNM. Readers of this Article and content agree that they cannot and will not seek to hold liable SS and FNM for any investment decisions by their readers or subscribers. SS and FNM and their respective affiliated companies are a news dissemination and financial marketing solutions provider and are NOT registered broker-dealers/analysts/investment advisers, hold no investment licenses and may NOT sell, offer to sell or offer to buy any security.

The Article and content related to the profiled company represent the personal and subjective views of the Author (SS), and are subject to change at any time without notice. The information provided in the Article and the content has been obtained from sources which the Author believes to be reliable. However, the Author (SS) has not independently verified or otherwise investigated all such information. None of the Author, SS, FNM, or any of their respective affiliates, guarantee the accuracy or completeness of any such information. This Article and content are not, and should not be regarded as investment advice or as a recommendation regarding any particular security or course of action; readers are strongly urged to speak with their own investment advisor and review all of the profiled issuer’s filings made with the Securities and Exchange Commission before making any investment decisions and should understand the risks associated with an investment in the profiled issuer’s securities, including, but not limited to, the complete loss of your investment. FNM was not compensated by any public company mentioned herein to disseminate this press release but was compensated seventy six hundred dollars by SS, a non-affiliated third party to distribute this release on behalf of Graphite Energy Corp.

FNM HOLDS NO SHARES OF ANY COMPANY NAMED IN THIS RELEASE.

This release contains “forward-looking statements” within the meaning of Section 27A of the Securities Act of 1933, as amended, and Section 21E the Securities Exchange Act of 1934, as amended and such forward-looking statements are made pursuant to the safe harbor provisions of the Private Securities Litigation Reform Act of 1995. “Forward-looking statements” describe future expectations, plans, results, or strategies and are generally preceded by words such as “may”, “future”, “plan” or “planned”, “will” or “should”, “expected,” “anticipates”, “draft”, “eventually” or “projected”. You are cautioned that such statements are subject to a multitude of risks and uncertainties that could cause future circumstances, events, or results to differ materially from those projected in the forward-looking statements, including the risks that actual results may differ materially from those projected in the forward-looking statements as a result of various factors, and other risks identified in a company’s annual report on Form 10-K or 10-KSB and other filings made by such company with the Securities and Exchange Commission. You should consider these factors in evaluating the forward-looking statements included herein, and not place undue reliance on such statements. The forward-looking statements in this release are made as of the date hereof and SS and FNM undertake no obligation to update such statements.

Media Contact:

FN Media Group, LLC
info@marketnewsupdates.com
+1(561)325-8757

SOURCE MarketNewsUpdates.com

Hopefully my insertions of ‘Canada’ and the ‘United States’ help to clarify matters. North America and the United States are not synonyms although they are sometimes used synonymously.

There is another copy of this news release on Wall Street Online (Deutschland), in both English and German. By the way, that was my first clue that there might be some German interest. The second clue was the Graphite Energy Corp. homepage. Unusually for a company with ‘headquarters’ in the Canadian province of British Columbia, there’s an option to read the text in German.

Graphite Energy Corp. seems to be a relatively new player in the ‘rush’ to mine graphite flakes for use in graphene-based applications. One of my first posts about mining for graphite flakes was a July 26, 2011 posting concerning Northern Graphite and their mining operation (Bissett Creek) in Ontario. I don’t write about them often but they are still active if their news releases are to be believed. The latest was issued February 28, 2018 and offers “financial metrics for the Preliminary Economic Assessment (the “PEA”) on the Company’s 100% owned Bissett Creek graphite project.”

The other graphite mining company mentioned here is Lomiko Metals. The latest posting here about Lomiko is a December 23, 2015 piece regarding an analysis and stock price recommendation by a company known as SeeThruEquity. Like Graphite Energy Corp., Lomiko has its mines in Québec and its business headquarters in British Columbia. Lomiko has a March 16, 2018 news release announcing its reinstatement for trading on the TSX (Toronto Stock Exchange),

(Vancouver, B.C.) Lomiko Metals Inc. (“Lomiko”) (TSX-V: LMR, OTC: LMRMF, FSE: DH8C) announces it has been successful in its reinstatement application with the TSX Venture Exchange and trading will begin at the opening on Tuesday, March 20, 2018.

Getting back to the flakes, here’s more about Graphite Energy Corp.’s mine (from the About Lac Aux Bouleaux webpage),

Lac Aux Bouleaux

The Lac Aux Bouleaux Property is comprised of 14 mineral claims in one contiguous block totaling 738.12 hectares of land on NTS 31J05, near the town of Mont-Laurier in southern Québec. Lac Aux Bouleaux “LAB” is a world-class graphite property that borders the only producing graphite mine in North America [Note: There are three countries in North America, Canada, the United States, and Mexico. Québec is in Canada.]. On the property we have a full production facility already built, which includes an open pit mine, processing facility, tailings pond, power and easy access to roads.

High Purity Levels

An important asset of LAB is its metallurgy. The property contains a high proportion of large and jumbo flakes, from which a high-purity concentrate was proven to be produced across all flake sizes by a simple flotation process. The concentrate can then be further purified, using the province’s green and affordable hydro-electricity, for use in lithium-ion batteries.

The geological work performed to verify the existing data consisted of visiting accessible graphite outcrops and reviewing historical exploration and development work on the property. Large-flake graphite showings located on the property were confirmed, with flake sizes in the range of 0.5 to 2 millimeters, typically present in shear zones at the contact of gneisses and marbles, where the graphite content usually ranges from 2% to 20%. The results are outstanding, showing the property to have jumbo-flake natural graphite.

An onsite mill structure, a tailings dam facility, and a historical open mining pit are already present on the property. The property is ready to be put into production based on the existing infrastructure. The company hopes to be able to ship its mined graphite by rail directly to Tesla’s Gigafactory being built in Nevada [United States], which will produce 35 GWh of batteries annually by 2020.

Adjacent Properties

The property is located in a very active graphite exploration and production area, adjacent to the south of TIMCAL’s Lac des Iles graphite mine in Quebec which is a world class deposit producing 25,000 tonnes of graphite annually. There are several graphite showings and past producing mines in its vicinity, including a historic deposit located on the property.

The open pit mine, in operation since 1989 with an onsite plant, ranks 5th in world production of graphite. The mine is operated by TIMCAL Graphite & Carbon, a subsidiary of Imerys S.A., a French multinational company. The mine has an average grade of 7.5% Cg (graphite carbon) and has been producing 50 different graphite products for various graphite end users around the globe.

Canadians! We have great flakes!

Ora Sound, a Montréal-based startup, and its ‘graphene’ headphones

For all the excitement about graphene there aren’t that many products as Glenn Zorpette notes in a June 20, 2017 posting about Ora Sound and its headphones on the Nanoclast blog (on the IEEE [Institute of Electrical and Electronics Engineers] website; Note: Links have been removed),

Graphene has long been touted as a miracle material that would deliver everything from tiny, ultralow-power transistors to the vastly long and ultrastrong cable [PDF] needed for a space elevator. And yet, 13 years of graphene development, and R&D expenditures well in the tens of billions of dollars have so far yielded just a handful of niche products. The most notable by far is a line of tennis racquets in which relatively small amounts of graphene are used to stiffen parts of the frame.

Ora Sound, a Montreal-based [Québec, Canada] startup, hopes to change all that. On 20 June [2017], it unveiled a Kickstarter campaign for a new audiophile-grade headphone that uses cones, also known as membranes, made of a form of graphene. “To the best of our knowledge, we are the first company to find a significant, commercially viable application for graphene,” says Ora cofounder Ari Pinkas, noting that the cones in the headphones are 95 percent graphene.

Kickstarter

It should be noted that participating in a Kickstarter campaign is an investment/gamble. I am not endorsing Ora Sound or its products. That said, this does look interesting (from the ORA: The World’s First Graphene Headphones Kickstarter campaign webpage),

The ORA GQ Headphones use nanotechnology to deliver the most groundbreaking audio listening experience. Scientists have long promised that one day Graphene will find its way into many facets of our lives, including displays, electronic circuits and sensors. ORA’s Graphene technology makes it one of the first companies to have created a commercially viable application for this Nobel Prize-winning material, a major scientific achievement.

The GQ Headphones come equipped with ORA’s patented GrapheneQ™ membranes, providing unparalleled fidelity. The headphones also offer all the features you would expect from a high-end audio product: wired/wireless operation, a gesture control track-pad, a digital MEMS microphone, breathable lambskin leather and an ear-shaped design optimized for sound quality and isolated comfort.

They have produced a slick video to promote their campaign,

At the time of publishing this post, the campaign will run for another eight days and has raised $650,949 CAD, more than $500,000 over the company’s original goal of $135,000. I’m sure they’re ecstatic, but this success can be a mixed blessing: they have many more people expecting a set of headphones than they anticipated, and that can mean production issues.
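For anyone checking my arithmetic on the campaign, the overage works out like this (the figures are simply the Kickstarter numbers quoted above; this is my own back-of-the-envelope check, nothing from Ora):

```python
# Back-of-the-envelope check of the Kickstarter figures (all amounts in CAD).
raised = 650_949   # total pledged at time of writing
goal = 135_000     # the campaign's original goal

overage = raised - goal
print(f"${overage:,} over the original goal")   # → $515,949 over the original goal
print(f"about {raised / goal:.1f}x the goal")   # → about 4.8x the goal
```

So the campaign really is more than half a million dollars past its target, and nearly five times oversubscribed, which is why the production-volume worry above isn’t idle.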

Further, there appears to be only one member of the team with business experience: Ari Pinkas, whose experience includes a few years of marketing strategy followed by founding an online marketplace for teachers. I would imagine Pinkas will be experiencing a very steep learning curve. Hopefully, Helge Seetzen, a member of the company’s advisory board, will be able to offer assistance. According to Seetzen’s Wikipedia entry, he is a “… German technologist and businessman known for imaging & multimedia research and commercialization,” with a Canadian educational background and business experience. The rest of the team and advisory board appear to be academics.

The technology

A March 14, 2017 article by Andy Riga for the Montréal Gazette gives a general description of the technology,

A Montreal startup is counting on technology sparked by a casual conversation between two brothers pursuing PhDs at McGill University.

They were chatting about their disparate research areas — one, in engineering, was working on using graphene, a form of carbon, in batteries; the other, in music, was looking at the impact of electronics on the perception of audio quality.

At first glance, the invention that ensued sounds humdrum.

It’s a replacement for an item you use every day. It’s paper thin, you probably don’t realize it’s there and its design has not changed much in more than a century. Called a membrane or diaphragm, it’s the part of a loudspeaker that vibrates to create the sound from the headphones over your ears, the wireless speaker on your desk, the cellphone in your hand.

Membranes are normally made of paper, Mylar or aluminum.

Ora’s innovation uses graphene, a remarkable material whose discovery garnered two scientists the 2010 Nobel Prize in physics but which has yet to fulfill its promise.

“Because it’s so stiff, our membrane gets better sound quality,” said Robert-Eric Gaskell, who obtained his PhD in sound recording in 2015. “It can produce more sound with less distortion, and the sound that you hear is more true to the original sound intended by the artist.

“And because it’s so light, we get better efficiency — the lighter it is, the less energy it takes.”

In January, the company demonstrated its membrane in headphones at the Consumer Electronics Show, a big trade convention in Las Vegas.

Six cellphone manufacturers expressed interest in Ora’s technology, some of which are now trying prototypes, said Ari Pinkas, in charge of product marketing at Ora. “We’re talking about big cellphone manufacturers — big, recognizable names,” he said.

Technology companies are intrigued by the idea of using Ora’s technology to make smaller speakers so they can squeeze other things, such as bigger batteries, into the limited space in electronic devices, Pinkas said. Others might want to use Ora’s membrane to allow their devices to play music louder, he added.

Makers of regular speakers, hearing aids and virtual-reality headsets have also expressed interest, Pinkas said.

Ora is still working on headphones.

Riga’s article offers a good overview for people who are not familiar with graphene.

Zorpette’s June 20, 2017 posting (on Nanoclast) offers a few more technical details (Note: Links have been removed),

During an interview and demonstration in the IEEE Spectrum offices, Pinkas and Robert-Eric Gaskell, another of the company’s cofounders, explained graphene’s allure to audiophiles. “Graphene has the ideal properties for a membrane,” Gaskell says. “It’s incredibly stiff, very lightweight—a rare combination—and it’s well damped,” which means it tends to quell spurious vibrations. By those metrics, graphene soundly beats all the usual choices: mylar, paper, aluminum, or even beryllium, Gaskell adds.

The problem is making it in sheets large enough to fashion into cones. So-called “pristine” graphene exists as flakes, [emphasis mine] perhaps 10 micrometers across, and a single atom thick. To make larger, strong sheets of graphene, researchers attach oxygen atoms to the flakes, and then other elements to the oxygen atoms to cross-link the flakes and hold them together strongly in what materials scientists call a laminate structure. The intellectual property behind Ora’s advance came from figuring out how to make these structures suitably thick and in the proper shape to function as speaker cones, Gaskell says. In short, he explains, the breakthrough was “being able to manufacture” in large numbers, “and in any geometry we want.”

Much of the R&D work that led to Ora’s process was done at nearby McGill University, by professor Thomas Szkopek of the Electrical and Computer Engineering department. Szkopek worked with Peter Gaskell, Robert-Eric’s younger brother. Ora is also making use of patents that arose from work done on graphene by the Nguyen Group at Northwestern University, in Evanston, Ill.

Robert-Eric Gaskell and Pinkas arrived at Spectrum with a preproduction model of their headphones, as well as some other headphones for the sake of comparison. The Ora prototype is clearly superior to the comparison models, but that’s not much of a surprise. …

… In the 20 minutes or so I had to audition Ora’s preproduction model, I listened to an assortment of classical and jazz standards and I came away impressed. The sound is precise, with fine details sharply rendered. To my surprise, I was reminded of planar-magnetic type headphones that are now surging in popularity in the upper reaches of the audiophile headphone market. Bass is smooth and tight. Overall, the unit holds up quite well against closed-back models in the $400 to $500 range I’ve listened to from Grado, Bowers & Wilkins, and Audeze.

Ora’s Kickstarter campaign page (Graphene vs GrapheneQ subsection) offers some information about their unique graphene composite,

A TECHNICAL INTRODUCTION TO GRAPHENE

Graphene is a new material, first isolated only 13 years ago. Formed from a single layer of carbon atoms, Graphene is a hexagonal crystal lattice in a perfect honeycomb structure. This fundamental geometry makes Graphene ridiculously strong and lightweight. In its pure form, Graphene is a single atomic layer of carbon. It can be very expensive and difficult to produce in sizes any bigger than small flakes. These challenges have prevented pristine Graphene from being integrated into consumer technologies.

THE GRAPHENEQ™ SOLUTION

At ORA, we’ve spent the last few years creating GrapheneQ, our own, proprietary Graphene-based nanocomposite formulation. We’ve specifically designed and optimized it for use in acoustic transducers. GrapheneQ is a composite material which is over 95% Graphene by weight. It is formed by depositing flakes of Graphene into thousands of layers that are bonded together with proprietary cross-linking agents. Rather than trying to form one, continuous layer of Graphene, GrapheneQ stacks flakes of Graphene together into a laminate material that preserves the benefits of Graphene while allowing the material to be formed into loudspeaker cones.

Scanning Electron Microscope (SEM) Comparison [image]

If you’re interested in more technical information on sound, acoustics, loudspeakers, and Ora’s graphene-based headphones, it’s all there on Ora’s Kickstarter campaign page.

The Québec nanotechnology scene in context and graphite flakes for graphene

Two Canadian provinces are heavily invested in nanotechnology research and commercialization efforts. Québec has poured money into its nanotechnology efforts, while Alberta, which has also invested heavily, managed to snare additional federal funds to host Canada’s National Institute of Nanotechnology (NINT). (This appears to be a current NINT website, or you can try this one on the National Research Council website.) I’d rank Ontario as a third centre, with the other provinces being considerably less invested. As for the North, I’ve not come across any nanotechnology research from that region. Finally, I stumble across more material about nanotechnology in Québec than for any other province, which is why I rate Québec as the most successful in its efforts.

Regarding graphene, Canada seems to have an advantage. We have great graphite flakes for making graphene. With mines in at least two provinces, Ontario and Québec, we have a ready source of supply. In my first posting (July 25, 2011) about graphite mines here, I had this,

Who knew large flakes could be this exciting? From the July 25, 2011 news item on Nanowerk,

Northern Graphite Corporation has announced that graphene has been successfully made on a test basis using large flake graphite from the Company’s Bissett Creek project in Northern Ontario. Northern’s standard 95%C, large flake graphite was evaluated as a source material for making graphene by an eminent professor in the field at the Chinese Academy of Sciences who is doing research making graphene sheets larger than 30cm2 in size using the graphene oxide methodology. The tests indicated that graphene made from Northern’s jumbo flake is superior to Chinese powder and large flake graphite in terms of size, higher electrical conductivity, lower resistance and greater transparency.

Approximately 70% of production from the Bissett Creek property will be large flake (+80 mesh) and almost all of this will in fact be +48 mesh jumbo flake which is expected to attract premium pricing and be a better source material for the potential manufacture of graphene. The very high percentage of large flakes makes Bissett Creek unique compared to most graphite deposits worldwide which produce a blend of large, medium and small flakes, as well as a large percentage of low value -150 mesh flake and amorphous powder which are not suitable for graphene, Li ion batteries or other high end, high growth applications.

Since then I’ve stumbled across more information about Québec’s mines than Ontario’s, as can be seen:

There are some other mentions of graphite mines in other postings but they are tangential to what’s being featured:

  • my Oct. 26, 2015 posting about St. Jean Carbon and its superconducting graphene;
  • my Feb. 20, 2015 posting about Nanoxplore and graphene production in Québec; and
  • this Feb. 23, 2015 posting about Grafoid and its sister company, Focus Graphite, which gets its graphite flakes from a deposit in the northeastern part of Québec.

After reviewing these posts, I’ve begun to wonder where Ora’s graphite flakes come from. In any event, I wish the folks at Ora and their Kickstarter funders the best of luck.

Artificial intelligence (AI) company (in Montréal, Canada) attracts $135M in funding from Microsoft, Intel, Nvidia and others

It seems there’s a push on to establish Canada as a centre for artificial intelligence research and, if the federal and provincial governments have their way, for commercialization of said research. As always, there seems to be a bit of competition between Toronto (Ontario) and Montréal (Québec) as to which will be the dominant hub for the Canadian effort, if one is to take Matthew Braga’s word for the situation.

In any event, Toronto initially seemed to have a mild advantage over Montréal, with the 2017 Canadian federal government budget announcement that the Canadian Institute for Advanced Research (CIFAR), based in Toronto, would launch a Pan-Canadian Artificial Intelligence Strategy, and with an announcement from the University of Toronto shortly after (from my March 31, 2017 posting),

On the heels of the March 22, 2017 federal budget announcement of $125M for a Pan-Canadian Artificial Intelligence Strategy, the University of Toronto (U of T) has announced the inception of the Vector Institute for Artificial Intelligence in a March 28, 2017 news release by Jennifer Robinson (Note: Links have been removed),

A team of globally renowned researchers at the University of Toronto is driving the planning of a new institute staking Toronto’s and Canada’s claim as the global leader in AI.

Geoffrey Hinton, a University Professor Emeritus in computer science at U of T and vice-president engineering fellow at Google, will serve as the chief scientific adviser of the newly created Vector Institute based in downtown Toronto.

“The University of Toronto has long been considered a global leader in artificial intelligence research,” said U of T President Meric Gertler. “It’s wonderful to see that expertise act as an anchor to bring together researchers, government and private sector actors through the Vector Institute, enabling them to aim even higher in leading advancements in this fast-growing, critical field.”

As part of the Government of Canada’s Pan-Canadian Artificial Intelligence Strategy, Vector will share $125 million in federal funding with fellow institutes in Montreal and Edmonton. All three will conduct research and secure talent to cement Canada’s position as a world leader in AI.

However, Montréal and the province of Québec are no slouches when it comes to supporting technology. From a June 14, 2017 article by Matthew Braga for CBC (Canadian Broadcasting Corporation) news online (Note: Links have been removed),

One of the most promising new hubs for artificial intelligence research in Canada is going international, thanks to a $135 million investment with contributions from some of the biggest names in tech.

The company, Montreal-based Element AI, was founded last October [2016] to help companies that might not have much experience in artificial intelligence start using the technology to change the way they do business.

It’s equal parts general research lab and startup incubator, with employees working to develop new and improved techniques in artificial intelligence that might not be fully realized for years, while also commercializing products and services that can be sold to clients today.

It was co-founded by Yoshua Bengio — one of the pioneers of a type of AI research called machine learning — along with entrepreneurs Jean-François Gagné and Nicolas Chapados, and the Canadian venture capital fund Real Ventures.

In an interview, Bengio and Gagné said the money from the company’s funding round will be used to hire 250 new employees by next January. A hundred will be based in Montreal, but an additional 100 employees will be hired for a new office in Toronto, and the remaining 50 for an Element AI office in Asia — its first international outpost.

They will join more than 100 employees who work for Element AI today, having left jobs at Amazon, Uber and Google, among others, to work at the company’s headquarters in Montreal.

The expansion is a big vote of confidence in Element AI’s strategy from some of the world’s biggest technology companies. Microsoft, Intel and Nvidia all contributed to the round, and each is a key player in AI research and development.

The company has some not unexpected plans and partners (from the Braga article; Note: A link has been removed),

The Series A round was led by Data Collective, a Silicon Valley-based venture capital firm, and included participation by Fidelity Investments Canada, National Bank of Canada, and Real Ventures.

What will it help the company do? Scale, its founders say.

“We’re looking at domain experts, artificial intelligence experts,” Gagné said. “We already have quite a few, but we’re looking at people that are at the top of their game in their domains.

“And at this point, it’s no longer just pure artificial intelligence, but people who understand, extremely well, robotics, industrial manufacturing, cybersecurity, and financial services in general, which are all the areas we’re going after.”

Gagné says that Element AI has already delivered 10 projects to clients in those areas, and have many more in development. In one case, Element AI has been helping a Japanese semiconductor company better analyze the data collected by the assembly robots on its factory floor, in a bid to reduce manufacturing errors and improve the quality of the company’s products.

There’s more to investment in Québec’s AI sector than Element AI (from the Braga article; Note: Links have been removed),

Element AI isn’t the only organization in Canada that investors are interested in.

In September, the Canadian government announced $213 million in funding for a handful of Montreal universities, while both Google and Microsoft announced expansions of their Montreal AI research groups in recent months alongside investments in local initiatives. The province of Quebec has pledged $100 million for AI initiatives by 2022.

Braga goes on to note some other initiatives, but at that point the article’s focus shifts exclusively to Toronto.

For more insight into the AI situation in Québec, there’s Dan Delmar’s May 23, 2017 article for the Montreal Express (Note: Links have been removed),

Advocating for massive government spending with little restraint admittedly deviates from the tenor of these columns, but the AI business is unlike any other before it. [emphasis mine] Having leaders acting as fervent advocates for the industry is crucial; resisting the coming technological tide is, as the Borg would say, futile.

The roughly 250 AI researchers who call Montreal home are not simply part of a niche industry. Quebec’s francophone character and Montreal’s multilingual citizenry are certainly factors favouring the development of language technology, but there’s ample opportunity for more ambitious endeavours with broader applications.

AI isn’t simply a technological breakthrough; it is the technological revolution. [emphasis mine] In the coming decades, modern computing will transform all industries, eliminating human inefficiencies and maximizing opportunities for innovation and growth — regardless of the ethical dilemmas that will inevitably arise.

“By 2020, we’ll have computers that are powerful enough to simulate the human brain,” said (in 2009) futurist Ray Kurzweil, author of The Singularity Is Near, a seminal 2006 book that has inspired a generation of AI technologists. Kurzweil’s projections are not science fiction but perhaps conservative, as some forms of AI already effectively replace many human cognitive functions. “By 2045, we’ll have expanded the intelligence of our human-machine civilization a billion-fold. That will be the singularity.”

The singularity concept, borrowed from physicists describing event horizons bordering matter-swallowing black holes in the cosmos, is the point of no return where human and machine intelligence will have completed their convergence. That’s when the machines “take over,” so to speak, and accelerate the development of civilization beyond traditional human understanding and capability.

The claims I’ve highlighted in Delmar’s article have been made before for other technologies: “xxx is like no other business before” and “it is a technological revolution.” Also, if you keep scrolling down to the bottom of the article, you’ll find Delmar is a ‘public relations consultant’, which, according to his LinkedIn profile, means he’s a managing partner in a PR firm known as Provocateur.

Bertrand Marotte’s May 20, 2017 article for the Montreal Gazette offers less hyperbole along with additional detail about the Montréal scene (Note: Links have been removed),

It might seem like an ambitious goal, but key players in Montreal’s rapidly growing artificial-intelligence sector are intent on transforming the city into a Silicon Valley of AI.

Certainly, the flurry of activity these days indicates that AI in the city is on a roll. Impressive amounts of cash have been flowing into academia, public-private partnerships, research labs and startups active in AI in the Montreal area.

…, researchers at Microsoft Corp. have successfully developed a computing system able to decipher conversational speech as accurately as humans do. The technology makes the same, or fewer, errors than professional transcribers and could be a huge boon to major users of transcription services like law firms and the courts.

Setting the goal of attaining the critical mass of a Silicon Valley is “a nice point of reference,” said tech entrepreneur Jean-François Gagné, co-founder and chief executive officer of Element AI, an artificial intelligence startup factory launched last year.

The idea is to create a “fluid, dynamic ecosystem” in Montreal where AI research, startup, investment and commercialization activities all mesh productively together, said Gagné, who founded Element with researcher Nicolas Chapados and Université de Montréal deep learning pioneer Yoshua Bengio.

“Artificial intelligence is seen now as a strategic asset to governments and to corporations. The fight for resources is global,” he said.

The rise of Montreal — and rival Toronto — as AI hubs owes a lot to provincial and federal government funding.

Ottawa promised $213 million last September to fund AI and big data research at four Montreal post-secondary institutions. Quebec has earmarked $100 million over the next five years for the development of an AI “super-cluster” in the Montreal region.

The provincial government also created a 12-member blue-chip committee to develop a strategic plan to make Quebec an AI hub, co-chaired by Claridge Investments Ltd. CEO Pierre Boivin and Université de Montréal rector Guy Breton.

But private-sector money has also been flowing in, particularly from some of the established tech giants competing in an intense AI race for innovative breakthroughs and the best brains in the business.

Montreal’s rich talent pool is a major reason Waterloo, Ont.-based language-recognition startup Maluuba decided to open a research lab in the city, said the company’s vice-president of product development, Mohamed Musbah.

“It’s been incredible so far. The work being done in this space is putting Montreal on a pedestal around the world,” he said.

Microsoft struck a deal this year to acquire Maluuba, which is working to crack one of the holy grails of deep learning: teaching machines to read like the human brain does. Among the company’s software developments are voice assistants for smartphones.

Maluuba has also partnered with an undisclosed auto manufacturer to develop speech recognition applications for vehicles. Voice recognition applied to cars can include such things as asking for a weather report or making remote requests for the vehicle to unlock itself.

Marotte’s Twitter profile describes him as a freelance writer, editor, and translator.