The October 2024 issue of The Advance (the Council of Canadian Academies [CCA] newsletter) arrived in my emailbox on October 15, 2024 with some interesting tidbits about artificial intelligence. Note: For anyone who wants to see the entire newsletter for themselves, you can sign up here or, for the French version, you can subscribe here.
Artificial Intelligence and Canada’s Science Diplomacy Future
For nearly two decades, Canada has been a global leader in artificial intelligence (AI) research, contributing a significant percentage of the world’s top-cited scientific publications on the subject. In that time, the number of countries participating in international collaborations has grown significantly, supporting new partnerships and accounting for as much as one quarter of all published research articles.
“Opportunities for partnerships are growing rapidly alongside the increasing complexity of new scientific discoveries and emerging industry sectors,” wrote the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships earlier this year, singling out Canada’s AI expertise. “At the same time, discussions of sovereignty and national interests abut the movement toward open science and transdisciplinary approaches.”
On Friday, November 22 [2024], the CCA will host “Strategy and Influence: AI and Canada’s Science Diplomacy Future” as part of the Canadian Science Policy Centre (CSPC) annual conference. The panel discussion will draw on case studies related to AI research collaboration to explore the ways in which such partnerships inform science diplomacy. Panellists include:
Monica Gattinger, chair of the CCA Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships and director of the Institute for Science, Society and Policy at the University of Ottawa (picture omitted)
David Barnes, head of the British High Commission Science, Climate, and Energy Team
Constanza Conti, Professor of Numerical Analysis at the University of Florence and Scientific Attaché at the Italian Embassy in Ottawa
Jean-François Doulet, Attaché for Science and Higher Education at the Embassy of France in Canada
Konstantinos Kapsouropoulos, Digital and Research Counsellor at the Delegation of the European Union to Canada
For details on CSPC 2024, click here. [Here’s the theme and a few more details about the conference: Empowering Society: The Transformative Value of Science, Knowledge, and Innovation; The 16th annual Canadian Science Policy Conference (CSPC) will be held in person from November 20th to 22nd, 2024] For a user guide to Navigating Collaborative Futures, from the CCA’s Expert Panel on International Science, Technology, Innovation and Knowledge Partnerships, click here.
448: Strategy and Influence: AI and Canada’s Science Diplomacy Future
Friday, November 22 [2024] 1:00 pm – 2:30 pm EST
Science and International Affairs and Security
About
Organized By: Council of Canadian Academies (CCA)
Artificial intelligence has already begun to transform Canada’s economy and society, and the broader advantages of international collaboration in AI research have the potential to make an even greater impact. With three national AI institutes and a Pan-Canadian AI Strategy, Canada’s AI ecosystem is thriving and positions the country to build stronger international partnerships in this area, and to develop more meaningful international collaborations in other areas of innovation. This panel will convene science attachés to share perspectives on science diplomacy and partnerships, drawing on case studies related to AI research collaboration.
The newsletter also provides links to additional readings on various topics; here are the AI items,
In Ottawa, Prime Minister Justin Trudeau and President Emmanuel Macron of France renewed their commitment “to strengthening economic exchanges between Canadian and French AI ecosystems.” They also revealed that Canada would be named Country of the Year at Viva Technology’s annual conference, to be held next June in Paris.
A “slower, but more capable” version of OpenAI’s ChatGPT is impressing scientists with the strength of its responses to prompts, according to Nature. The new version, referred to as “o1,” outperformed a previous ChatGPT model on a standardized test involving chemistry, physics, and biology questions, and “beat PhD-level scholars on the hardest series of questions.” [Note: As of October 16, 2024, the Nature news article of October 1, 2024 appears to be open access. It’s unclear how long this will continue to be the case.]
…
In memoriam: Abhishek Gupta, the founder and principal researcher of the Montreal AI Ethics Institute and a member of the CCA Expert Panel on Artificial Intelligence for Science and Engineering, died on September 30 [2024]. His colleagues shared the news in a memorial post, writing, “It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.”
Meeting in Ottawa on September 26, 2024, Justin Trudeau, the Prime Minister of Canada, and Emmanuel Macron, the President of the French Republic, issued a call to action to promote the development of a responsible approach to artificial intelligence (AI).
Our two countries will increase the coordination of our actions, as Canada will assume the Presidency of the G7 in 2025 and France will host the AI Action Summit on February 10 and 11, 2025.
Our two countries are working on the development and use of safe, secure and trustworthy AI as part of a risk-aware, human-centred and innovation-friendly approach. This cooperation is based on shared values. We believe that the development and use of AI need to be beneficial for individuals and the planet, for example by increasing human capabilities and developing creativity, ensuring the inclusion of under-represented people, reducing economic, social, gender and other inequalities, protecting information integrity and protecting natural environments, which in turn will promote inclusive growth, well-being, sustainable development and environmental sustainability.
We are committed to promoting the development and use of AI systems that respect the rule of law, human rights, democratic values and human-centred values. Respecting these values means developing and using AI systems that are transparent and explainable, robust, safe and secure, and whose stakeholders are held accountable for respecting these principles, in line with the Recommendation of the OECD Council on Artificial Intelligence, the Hiroshima AI Process, the G20 AI Principles and the International Partnership for Information and Democracy.
Based on these values and principles, Canada and France are working on high-quality scientific cooperation. In April 2023, we formalized the creation of a joint committee for science, technology and innovation. This committee has identified emerging technologies, including AI, as one of the priority areas for cooperation between our two countries. In this context, a call for AI research projects was announced last July, scheduled for the end of 2024 and funded, on the French side, by the French National Research Agency, and, on the Canadian side, by a consortium made up of Canada’s three granting councils (the Natural Sciences and Engineering Research Council of Canada, the Social Sciences and Humanities Research Council of Canada and the Canadian Institutes of Health Research) and IVADO [Institut de valorisation des données], the AI research, training and transfer consortium.
We will also collaborate on the evaluation and safety of AI models. We have announced key AI safety initiatives, including the AI Safety Institute of Canada [emphasis mine; not to be confused with Artificial Intelligence Governance & Safety Canada (AIGS)], which will be launched soon, and France’s National Centre for AI evaluation. We expect these two agencies will work to improve knowledge and understanding of technical and socio-technical aspects related to the safety and evaluation of advanced AI systems.
Canada and France are committed to strengthening economic exchanges between Canadian and French AI ecosystems, whether by organizing delegations, like the one organized by Scale AI with 60 Canadian companies at the latest Viva Technology conference in Paris, or showcasing France at the ALL IN event in Montréal on September 11 and 12, 2024, through cooperation between companies, for example, through large companies’ adoption of services provided by small companies or through the financial support that investment funds provide to companies on both sides of the Atlantic. Our two countries will continue their cooperation at the upcoming Viva Technology conference in Paris, where Canada will be the Country of the Year.
We want to strengthen our cooperation in terms of developing AI capabilities. We specifically want to promote access to AI’s compute capabilities in order to support national and international technological advances in research and business, notably in emerging markets and developing countries, while committing to strengthening their efforts to make the necessary improvements to the energy efficiency of these infrastructures. We are also committed to sharing their experience in initiatives to develop AI skills and training in order to accelerate workforce deployment.
Canada and France cooperate on the international stage to ensure the alignment and convergence of AI regulatory frameworks, given the economic potential and the global social consequences of this technological revolution. Under our successive G7 presidencies in 2018 and 2019, we worked to launch the Global Partnership on Artificial Intelligence (GPAI), which now has 29 members from all over the world, and whose first two centres of expertise were opened in Montréal and Paris. We support the creation of the new integrated partnership, which brings together OECD and GPAI member countries, and welcomes new members, including emerging and developing economies. We hope that the implementation of this new model will make it easier to participate in joint research projects that are of public interest, reduce the global digital divide and support constructive debate between the various partners on standards and the interoperability of their AI-related regulations.
We will continue our cooperation at the AI Action Summit in France on February 10 and 11, 2025, where we will strive to find solutions to meet our common objectives, such as the fight against disinformation or the reduction of the environmental impact of AI. With the objective of actively and tangibly promoting the use of the French language in the creation, production, distribution and dissemination of AI, taking into account its richness and diversity, and in compliance with copyright, we will attempt to identify solutions that are in line with the five themes of the summit: AI that serves the public interest, the future of work, innovation and culture, trust in AI and global AI governance.
Canada has accepted to co-chair the working group on global AI governance in order to continue the work already carried out by the GPAI, the OECD, the United Nations and its various bodies, the G7 and the G20. We would like to highlight and advance debates on the cultural challenges of AI in order to accelerate the joint development of relevant responses to the challenges faced. We would also like to develop the change management policies needed to support all of the affected cultural sectors. We will continue these discussions together during our successive G7 presidencies in 2025 and 2026.
I checked out the In memoriam notice for Abhishek Gupta and found this, Note: Links have been removed except the link to Abhishek Gupta’s memorial page hosting tributes, stories, and more. The link is in the highlighted paragraph,
Honoring the Life and Legacy of a Leader in AI Ethics
In accordance with his family’s wishes, it is with profound sadness that we announce the passing of Abhishek Gupta, Founder and Principal Researcher of the Montreal AI Ethics Institute (MAIEI), Director for Responsible AI at the Boston Consulting Group (BCG), and a pioneering voice in the field of AI ethics. Abhishek passed away peacefully in his sleep on September 30, 2024 in India, surrounded by his loving family. He is survived by his father, Ashok Kumar Gupta; his mother, Asha Gupta; and his younger brother, Abhijay Gupta.
Note: Details of a memorial service will be announced in the coming weeks. For those who wish to share stories, personal anecdotes, and photos of Abhishek, please visit www.forevermissed.com/abhishekgupta — your contributions will be greatly appreciated by his family and loved ones.
Born on December 20, 1992, in India, Abhishek’s intellectual curiosity and drive to understand technology led him on a remarkable journey. After excelling at Delhi Public School, Abhishek attended McGill University in Montreal, where he earned a Bachelor of Science in Computer Science (BSc’15). Following his graduation, Abhishek worked as a software engineer at Ericsson. He later joined Microsoft as a machine learning engineer, where he also served on the CSE Responsible AI Board. It was during his time in Montreal that Abhishek envisioned a future where ethics and AI would intertwine—a vision that became the driving force behind his life’s work.
The Beginnings: Building a Global AI Ethics Community
Abhishek’s vision for MAIEI was rooted in community building. He began hosting in-person AI Ethics Meetups in Montreal throughout 2017. These gatherings were unique—participants completed assigned readings in advance, split into small groups for discussion, and then reconvened to share insights. This approach fostered deep, structured conversations and made AI ethics accessible to everyone, regardless of their background. The conversations and insights from these meetups became the foundation of MAIEI, which was launched in May 2018.
When the pandemic hit, Abhishek adapted the meetup format to an online setting, enabling MAIEI to expand worldwide. It was his idea to bring these conversations to a global stage, using virtual platforms to ensure voices from all corners of the world could join in. He passionately stood up for the “little guy,” making sure that those whose voices might be overlooked or unheard in traditional forums had a platform. Under his stewardship, MAIEI emerged as a globally recognized leader in fostering public discussions on the ethical implications of artificial intelligence. Through MAIEI, Abhishek fulfilled his mission of democratizing AI ethics literacy, empowering individuals from all backgrounds to engage with the future of technology.
…
I offer my sympathies to his family, friends, and communities for their profound loss.
It’s a bit disconcerting to think that one might be resurrected, in this case digitally, but Dr Masaki Iwasaki has published a study on attitudes to digital cloning and resurrection consent, which could prove helpful when establishing one’s final wishes.
In a 2014 episode of sci-fi series Black Mirror, a grieving young widow reconnects with her dead husband using an app that trawls his social media history to mimic his online language, humor and personality. It works. She finds solace in the early interactions – but soon wants more.
Such a scenario is no longer fiction. In 2017, the company Eternime aimed to create an avatar of a dead person using their digital footprint, but this “Skype for the dead” didn’t catch on. The machine-learning and AI algorithms just weren’t ready for it. Neither were we.
Now, in 2024, amid exploding use of ChatGPT-like programs, similar efforts are on the way. But should digital resurrection be allowed at all? And are we prepared for the legal battles over what constitutes consent?
In a study published in the Asian Journal of Law and Economics, Dr Masaki Iwasaki of Harvard Law School and currently an assistant professor at Seoul National University, explores how the deceased’s consent (or otherwise) affects attitudes to digital resurrection.
US adults were presented with scenarios where a woman in her 20s dies in a car accident. A company offers to bring a digital version of her back, but her consent is, at first, ambiguous. What should her friends decide?
Two options – one where the deceased has consented to digital resurrection and another where she hasn’t – were read by participants at random. They then answered questions about the social acceptability of bringing her back on a five-point rating scale, considering other factors such as ethics and privacy concerns.
Results showed that expressed consent shifted acceptability two points higher compared to dissent. “Although I expected societal acceptability for digital resurrection to be higher when consent was expressed, the stark difference in acceptance rates – 58% for consent versus 3% for dissent – was surprising,” says Iwasaki. “This highlights the crucial role of the deceased’s wishes in shaping public opinion on digital resurrection.”
In fact, 59% of respondents disagreed with their own digital resurrection, and around 40% of respondents did not find any kind of digital resurrection socially acceptable, even with expressed consent. “While the will of the deceased is important in determining the societal acceptability of digital resurrection, other factors such as ethical concerns about life and death, along with general apprehension towards new technology are also significant,” says Iwasaki.
The results reflect a discrepancy between existing law and public sentiment. People’s general feelings – that the dead’s wishes should be respected – are actually not protected in most countries. The digitally recreated John Lennon in the film Forrest Gump, or animated hologram of Amy Winehouse reveal the ‘rights’ of the dead are easily overridden by those in the land of the living.
So, is your digital destiny something to consider when writing your will? It probably should be but in the current absence of clear legal regulations on the subject, the effectiveness of documenting your wishes in such a way is uncertain. For a start, how such directives are respected varies by legal jurisdiction. “But for those with strong preferences documenting their wishes could be meaningful,” says Iwasaki. “At a minimum, it serves as a clear communication of one’s will to family and associates, and may be considered when legal foundations are better established in the future.”
It’s certainly a conversation worth having now. Many generative AI chatbot services, such as Replika (“The AI companion who cares”) and Project December (“Simulate the dead”), already enable conversations with chatbots replicating real people’s personalities. The service ‘You, Only Virtual’ (YOV) allows users to upload someone’s text messages, emails and voice conversations to create a ‘versona’ chatbot. And, in 2020, Microsoft obtained a patent to create chatbots from text, voice and image data for living people as well as for historical figures and fictional characters, with the option of rendering in 2D or 3D.
Iwasaki says he’ll investigate this and the digital resurrection of celebrities in future research. “It’s necessary first to discuss what rights should be protected, to what extent, then create rules accordingly,” he explains. “My research, building upon prior discussions in the field, argues that the opt-in rule requiring the deceased’s consent for digital resurrection might be one way to protect their rights.”
There is a link to the study in the press release above but this includes a citation, of sorts,
After seeing the description for Laura U. Marks’s recent work ‘Streaming Carbon Footprint’ (in my October 13, 2023 posting about upcoming ArtSci Salon events in Toronto), where she focuses on the environmental impact of streaming media and digital art, I was reminded of some September 2023 news.
A September 9, 2023 news item (an Associated Press article by Matt O’Brien and Hannah Fingerhut) on phys.org, also published September 12, 2023 on the Iowa Public Radio website, describes an unexpected cost of building ChatGPT and other AI agents. Note: Links have been removed,
The cost of building an artificial intelligence product like ChatGPT can be hard to measure.
But one thing Microsoft-backed OpenAI needed for its technology was plenty of water [emphases mine], pulled from the watershed of the Raccoon and Des Moines rivers in central Iowa to cool a powerful supercomputer as it helped teach its AI systems how to mimic human writing.
As they race to capitalize on a craze for generative AI, leading tech developers including Microsoft, OpenAI and Google have acknowledged that growing demand for their AI tools carries hefty costs, from expensive semiconductors to an increase in water consumption.
But they’re often secretive about the specifics. Few people in Iowa knew about its status as a birthplace of OpenAI’s most advanced large language model, GPT-4, before a top Microsoft executive said in a speech it “was literally made next to cornfields west of Des Moines.”
…
In its latest environmental report, Microsoft disclosed that its global water consumption spiked 34% from 2021 to 2022 (to nearly 1.7 billion gallons, or more than 2,500 Olympic-sized swimming pools), a sharp increase compared to previous years that outside researchers tie to its AI research. [emphases mine]
“It’s fair to say the majority of the growth is due to AI,” including “its heavy investment in generative AI and partnership with OpenAI,” said Shaolei Ren, [emphasis mine] a researcher at the University of California, Riverside who has been trying to calculate the environmental impact of generative AI products such as ChatGPT.
…
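As a quick back-of-the-envelope check (my own arithmetic, not the article’s), 1.7 billion US gallons does indeed work out to roughly 2,500 Olympic-sized pools, assuming the nominal 2,500 cubic metre volume of an Olympic pool:

```python
# Back-of-the-envelope check of the AP figures (my own arithmetic, not from the article).
# Assumes US gallons and the nominal 2,500 m3 volume of an Olympic-sized pool (50 m x 25 m x 2 m).

GALLON_M3 = 0.003785          # one US gallon in cubic metres
OLYMPIC_POOL_M3 = 2500        # nominal Olympic pool volume in cubic metres

water_gallons = 1.7e9         # Microsoft's reported 2022 global water consumption (~1.7 billion gallons)
water_m3 = water_gallons * GALLON_M3

pools = water_m3 / OLYMPIC_POOL_M3
print(f"{water_m3:,.0f} m3 of water ~= {pools:,.0f} Olympic-sized pools")
# roughly 6.4 million m3, or about 2,570 pools -- consistent with the article's "more than 2,500"
```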
If you have the time, do read the O’Brien and Fingerhut article in its entirety. (Later in this post, I have a citation for and a link to a paper by Ren.)
Jason Clayworth’s September 18, 2023 article for AXIOS describes the issue from the Iowan perspective, Note: Links have been removed,
Future data center projects in West Des Moines will only be considered if Microsoft can implement technology that can “significantly reduce peak water usage,” the Associated Press reports.
Why it matters: Microsoft’s five WDM data centers — the “epicenter for advancing AI” — represent more than $5 billion in investments in the last 15 years.
Yes, but: They consumed as much as 11.5 million gallons of water a month for cooling, or about 6% of WDM’s total usage during peak summer usage during the last two years, according to information from West Des Moines Water Works.
…
This information becomes more intriguing (and disturbing) after reading a February 10, 2023 article for the World Economic Forum titled ‘This is why we can’t dismiss water scarcity in the US‘ by James Rees and/or an August 11, 2020 article ‘Why is America running out of water?‘ by Jon Heggie published by National Geographic, which is a piece of paid content. Note: Despite the fact that it’s sponsored by Finish Dish Detergent, the research in Heggie’s article looks solid.
From Heggie’s article, Note: Links have been removed,
In March 2019, storm clouds rolled across Oklahoma; rain swept down the gutters of New York; hail pummeled northern Florida; floodwaters forced evacuations in Missouri; and a blizzard brought travel to a stop in South Dakota. Across much of America, it can be easy to assume that we have more than enough water. But that same month, as storms battered the country, a government-backed report issued a stark warning: America is running out of water.
…
As the U.S. water supply decreases, demand is set to increase. On average, each American uses 80 to 100 gallons of water every day, with the nation’s estimated total daily usage topping 345 billion gallons—enough to sink the state of Rhode Island under a foot of water. By 2100 the U.S. population will have increased by nearly 200 million, with a total population of some 514 million people. Given that we use water for everything, the simple math is that more people mean more water stress across the country.
And we are already tapping into our reserves. Aquifers, porous rocks and sediment that store vast volumes of water underground, are being drained. Nearly 165 million Americans rely on groundwater for drinking water, farmers use it for irrigation―37 percent of our total water usage is for agriculture—and industry needs it for manufacturing. Groundwater is being pumped faster than it can be naturally replenished. The Central Valley Aquifer in California underlies one of the nation’s most agriculturally productive regions, but it is in drastic decline and has lost about ten cubic miles of water in just four years.
Decreasing supply and increasing demand are creating a perfect water storm, the effects of which are already being felt. The Colorado River carved its way 1,450 miles from the Rockies to the Gulf of California for millions of years, but now no longer reaches the sea. In 2018, parts of the Rio Grande recorded their lowest water levels ever; Arizona essentially lives under permanent drought conditions; and South Florida’s freshwater aquifers are increasingly susceptible to salt water intrusion due to over-extraction.
…
The focus is on individual use of water and Heggie ends his article by suggesting we use less,
… And every American can save more water at home in multiple ways, from taking shorter showers to not rinsing dishes under a running faucet before loading them into a dishwasher, a practice that wastes around 20 gallons of water for each load. …
As an advertising pitch goes, this is fairly subtle as there’s no branding in the article itself and it is almost wholly informational.
Attempts to stave off water shortages as noted in Heggie’s and other articles include groundwater pumping both for individual use and industrial use. This practice has had an unexpected impact according to a June 16, 2023 article by Warren Cornwall for Science (magazine),
While spinning on its axis, Earth wobbles like an off-kilter top. Sloshing molten iron in Earth’s core, melting ice, ocean currents, and even hurricanes can all cause the poles to wander. Now, scientists have found that a significant amount of the polar drift results from human activity: pumping groundwater for drinking and irrigation.
“The very way the planet wobbles is impacted by our activities,” says Surendra Adhikari, a geophysicist at NASA’s Jet Propulsion Laboratory and an expert on Earth’s rotation who was not involved in the study. “It is, in a way, mind boggling.”
…
Clark R. Wilson, a geophysicist at the University of Texas at Austin, and his colleagues thought the removal of tens of gigatons of groundwater each year might affect the drift. But they knew it could not be the only factor. “There’s a lot of pieces that go into the final budget for causing polar drift,” Wilson says.
The scientists built a model of the polar wander, accounting for factors such as reservoirs filling because of new dams and ice sheets melting, to see how well they explained the polar movements observed between 1993 and 2010. During that time, satellite measurements were precise enough to detect a shift in the poles as small as a few millimeters.
Dams and ice changes were not enough to match the observed polar motion. But when the researchers also put in 2150 gigatons of groundwater that hydrologic models estimate were pumped between 1993 and 2010, the predicted polar motion aligned much more closely with observations. Wilson and his colleagues conclude that the redistribution of that water weight to the world’s oceans has caused Earth’s poles to shift nearly 80 centimeters during that time. In fact, groundwater removal appears to have played a bigger role in that period than the release of meltwater from ice in either Greenland or Antarctica, the scientists reported Thursday [June 15, 2023] in Geophysical Research Letters.
…
The new paper helps confirm that groundwater depletion added approximately 6 millimeters to global sea level rise between 1993 and 2010. “I was very happy” that this new method matched other estimates, Seo [Ki-Weon Seo, a geophysicist at Seoul National University and the study’s lead author] says. Because detailed astronomical measurements of the polar axis location go back to the end of the 19th century, polar drift could enable Seo to trace the human impact on the planet’s water over the past century.
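Out of curiosity, I ran the rough arithmetic on that 6 millimetre figure (my own back-of-the-envelope check; the paper’s estimate involves far more careful accounting). Spreading 2,150 gigatonnes of water over the surface area of the world’s oceans lands in the same ballpark:

```python
# Rough sanity check of the ~6 mm sea-level contribution from 2,150 Gt of pumped groundwater.
# My own arithmetic, not the paper's method: assume all pumped water ends up evenly spread
# over the ocean surface.

OCEAN_AREA_M2 = 3.61e14       # approximate surface area of the world's oceans, in square metres
WATER_DENSITY = 1000.0        # kilograms per cubic metre

groundwater_kg = 2150e12      # 2,150 gigatonnes expressed in kilograms
volume_m3 = groundwater_kg / WATER_DENSITY

sea_level_rise_mm = volume_m3 / OCEAN_AREA_M2 * 1000
print(f"Implied sea-level rise: {sea_level_rise_mm:.1f} mm")   # ~6.0 mm
```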
Two papers: environmental impact from AI and groundwater pumping wobbles poles
I have two links and citations for Ren’s paper on AI and its environmental impact,
Towards Environmentally Equitable AI via Geographical Load Balancing by Pengfei Li, Jianyi Yang, Adam Wierman, and Shaolei Ren. Subjects: Artificial Intelligence (cs.AI); Computers and Society (cs.CY). Cite as: arXiv:2307.05494 [cs.AI] (or arXiv:2307.05494v1 [cs.AI] for this version). DOI: https://doi.org/10.48550/arXiv.2307.05494. Submitted June 20, 2023.
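I haven’t worked through the paper’s optimization framework, but the core idea named in the title, geographical load balancing, is easy to illustrate. Here’s a toy sketch of my own (not the authors’ algorithm): route batches of AI jobs to whichever data centre has the lowest water cost per job and still has spare capacity. The centre names and water-intensity numbers are invented for illustration.

```python
# Toy illustration of geographical load balancing (my own sketch, not the Li et al. formulation):
# greedily send jobs to the data centre with the lowest water cost per job that has room left.

from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    water_l_per_job: float   # litres of cooling water per job (illustrative numbers only)
    capacity: int            # jobs this centre can still absorb
    assigned: int = 0

centers = [
    DataCenter("centre_a", water_l_per_job=8.0, capacity=500),
    DataCenter("centre_b", water_l_per_job=3.5, capacity=300),
    DataCenter("centre_c", water_l_per_job=12.0, capacity=800),
]

def assign(jobs: int) -> float:
    """Place jobs on the lowest-water-cost centres first; return total litres of water used."""
    total_water = 0.0
    for dc in sorted(centers, key=lambda d: d.water_l_per_job):
        take = min(jobs, dc.capacity - dc.assigned)
        dc.assigned += take
        total_water += take * dc.water_l_per_job
        jobs -= take
        if jobs == 0:
            break
    if jobs:
        raise RuntimeError("not enough capacity for the remaining jobs")
    return total_water

print(f"Water used for 1,000 jobs: {assign(1000):,.0f} litres")
for dc in centers:
    print(dc.name, dc.assigned, "jobs")
```

Judging by the title, the paper’s concern is making this kind of balancing environmentally equitable across regions, something a naive greedy scheme like this one ignores entirely.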
Augmented Self: Can Generative AI be more than just a tool? Sept. 20 [2023] 6:00-8:00 @Fields
This event is a collaboration between ArtSci Salon and the Quantified Self Meet up Group led by Eric Boyd. Join us for a thought-provoking exploration into the world of “Augmented Self: Can Generative AI be more than just a tool?”.
While the era of the Quantified Self isn’t over, new tools have emerged which make the idea of JUST quantifying yourself (for personal growth or insight) seem outdated. The widespread assumption is that ChatGPT and other Generative AI tools can do at least some of your thinking FOR YOU. Similarly, MidJourney can churn out passable images from just a prompt (that ChatGPT wrote for you), even if you aren’t an artist. This ability has raised many red flags and concerns regarding intellectual property and copyright infringement. And hundreds more such tools are arriving like a tsunami as venture capitalists pour billions into Generative AI startups. How do we navigate Generative AI for personal growth and creativity? What are its ethical uses? How do we use it for personal growth and creativity, for education or accessibility? What is its impact on our sense of self and on the conditions of our employment?
Event Schedule:
6:00-6:30pm. Reception and Networking
6:30-7:15pm. Panel Discussion (see below)
7:15-7:45pm. Q&A with the audience
7:45-8:00pm. Networking
8:00pm. Optional: retiring to a nearby pub for discussions
Panel Discussion:
Engage with a diverse panel of experts, each offering a nuanced perspective on the integration of AI into personal development:
Techie Viewpoint: Eric Boyd will talk in general about the “Augmented Self” idea, and relate his experiences working with these tools on an unusual creative project – a solarpunk tarot deck. It’s a gigantic project, and “orchestrating artificial cognition” is the weird “augmented” experience at the heart of it.
Artist Viewpoint: Ryan Kelln, a software artist, has been using text-to-image tools to explore remixing, appropriation, and representation in his latest concert (https://www.ryankelln.com/project/transmigrations/). His exploration didn’t answer all his questions but left him changed for the better.
Other Viewpoints: Seeking project show & tell, brief opinions and constructive criticism!
This event will be recorded. If you wish to join us on Zoom, please, head to the Facebook event page here a few days before the event to get the link.
Audience Participation: We invite your participation! If you’d like to speak on the panel, we are still looking to flesh it out. Ideally we’re looking for an educator who is grappling seriously with the impact of e.g. ChatGPT on their students and the process and goals of education in general. And we’re open to other ideas and viewpoints! Please contact the organizer (Eric Boyd) via meetup message with a brief description of your background and what you might share/say in 5+ minutes. It doesn’t need to be formal, these are the frontiers!
And everyone, please bring your curiosity and your questions! We welcome all input, especially critical or out-of-frame input. We don’t even know what kind of language we should be using to discuss this!
If you are intrigued by the intersection of technology, self-improvement, and personal expression and seek a nuanced perspective on the augmented self, this event is designed for you.
Join us for an evening of generative AI collaboration stories (in the usual manner of QS “what did you do”), candid exploration, and thought-provoking dialogue. Chart your course through the potential and complexities of the Augmented Self with the guidance of insightful experts and a community of like-minded explorers.
This event description began from a series of prompts to ChatGPT. Can you spot the unedited sections? Does it matter if you can or can’t? It feels very new and different to make things this way. Let’s talk about it. See the full description by organizer Eric Boyd.
“Migrations Without Borders” is a modular piece of art that explores the potential of AI to mimic and remix cultural styles and elements [emphasis mine]. Incorporating eight distinct musical styles and corresponding visual elements, the piece allows for the dynamic composition of linked music themes and visuals.
But “Migrations” is more than just a showcase of AI’s abilities. It is a deliberate mixture of themes, including immigration, remix culture, AI bias, and the interplay of language and imagery. Drawing from Dhaivat’s personal experience and Toronto’s diverse cultural landscape, the piece creates a universe of cross-pollination that encourages reflection on the ways in which technology is changing our relationship to culture, identity, and acceptable thought.
The art invites us to consider the consequences of AI’s powers of mimicry and integration. What does it mean for likenesses and cultures to collide and mix so easily? How do we navigate the borrowing of styles and representations that may not be our own? What responsibilities and freedoms do we have in this rapidly evolving landscape?
…
North Vancouver
I wouldn’t ordinarily post about an art exhibition closing or finale event but this is a good companion to the Toronto event and it gives people in the Vancouver area an opportunity for something that’s more avant garde than I realized when the exhibition was announced in May 2023. From the Phase Shifting Index Closing Celebration event page on The Polygon Gallery website,
Jeremy Shaw: Phase Shifting Index
Closing Celebration
Sunday, September 24 [2023], 5:00pm
[Location: The Polygon Gallery at 101 Carrie Cates Court in North Vancouver, BC, Canada]
Artist in attendance
Final day to see Phase Shifting Index—for the full experience of the seven-channel work please come at least 35 minutes before the exhibition closes at 5:00 pm.
Doors at 5:00pm
Screening of Jeremy Shaw’s short film Quickeners at 5:15pm
Conversation between Jeremy Shaw and The Polygon’s Audain Chief Curator Monika Szewczyk at 5:45pm
Reception at 6:15pm
About Quickeners
Quickeners: They live about 500 years after us and belong to the entirely rational-thinking species of Quantum Human, who are immortal and connected to each other through an abstract entity called “The Hive”. However, Quickeners have developed a rare disorder named “Human Atavism Syndrome” – or H.A.S. – that prompts them to unexplainably desire to engage in long-forgotten behavioural patterns of humans. Detached from the Hive, the Quickeners fall into an ecstatic state in which they sing, clap, cry, scream, dance and handle poisonous snakes [emphasis mine].
About Phase Shifting Index
Through a seven-channel video, sound, and light installation—the most ambitious use to date of Jeremy Shaw’s signature, evolving ‘post-documentary’ approach—visitors experience seven distinct subcultures that believe they can fundamentally alter reality.
About Jeremy Shaw
Born in North Vancouver and now based in Berlin, Jeremy Shaw works in a variety of media to explore altered states and the cultural and scientific practices that aspire to map transcendental experience. His films, installations and sculptures have gained worldwide acclaim with solo exhibitions at Centre Pompidou, Paris, MoMA PS1, New York, and Schinkel Pavillon, Berlin as well as international surveys including the 57th Venice Biennale, 16th Lyon Biennale and Manifesta 11, Zurich.
For anyone who does decide on the full experience, here’s more about Phase Shifting Index from the May 17, 2023 Polygon news release,
From June 23 to Sept. 24, 2023, The Polygon Gallery presents the North American premiere of Phase Shifting Index by North Vancouver-born, Berlin-based artist Jeremy Shaw. The immersive installation combines film, sound, and light to tell a story about an imagined future in which human beliefs and survival are at stake.
…
Phase Shifting Index is a seven-channel video, sound, and light installation that functions as a science-fiction pseudo-documentary about seven distinct subcultures that believe they can fundamentally alter reality. Each screen shows a group engaging in ritualistic movements while dressed in clothing that places them in periods ranging from the 1960s to the 1990s. Shaw uses outdated modes of 20th-century video technology (such as 16mm film and Hi-8 video tape), while interviews in indecipherable languages are subtitled in English. All seven channels are tied together by an overarching narrator who describes their belief systems and the significance of their movements: body-mind centering, robotic popping-and-locking, modern and postmodern dance, jump-style, hardcore punk skanking, and trust exercises, amongst others.
As the work progresses, the audiovisual elements of each screen draw the viewer into a dramatic narrative arc. At the climax, the seven autonomous subcultural groups align in a trans-temporal dance routine, with all subjects on all screens engaged in the same cathartic, synchronized movements, before disintegrating into abstraction and chaos. Sounds and sights collide on screen and then meld into a synaptic colour field. The result is a suspension of time and space, as the seven parallel realities fuse into one psychedelic art installation.
…
It was the ‘psychedelic’ in the last line along with references to the 1960s that dampened my enthusiasm for this ‘mind blowing’ experience. However, Ryan Kelln’s Transmigrations and proposed talk at Art Science Salon/Quantified Self Toronto’s event “Augmented Self: Can Generative AI be more than just a tool?” broadened my thinking on the matter.
I have two items on ChatGPT and academic cheating. The first (from April 2023) deals with the economic impact on people who make their living by writing the papers for the cheaters and the second (from May 2023) deals with unintended consequences for the cheaters (the students not the contract writers).
Making a living in Kenya
Martin K.N. Siele’s April 21, 2023 article for restofworld.org (a website where you can find “Reporting [on] Global Tech Stories”) provides a perspective that’s unfamiliar to me, Note: Links have been removed,
For the past nine years, Collins, a 27-year-old freelance writer, has been making money by writing assignments for students in the U.S. — over 13,500 kilometers away from Nanyuki in central Kenya, where he lives. He is part of the “contract cheating” industry, known locally as simply “academic writing.” Collins writes college essays on topics including psychology, sociology, and economics. Occasionally, he is even granted direct access to college portals, allowing him to submit tests and assignments, participate in group discussions, and talk to professors using students’ identities. In 2022, he made between $900 and $1,200 a month from this work.
Lately, however, his earnings have dropped to $500–$800 a month. Collins links this to the meteoric rise of ChatGPT and other generative artificial intelligence tools.
“Last year at a time like this, I was getting, on average, 50 to 70 assignments, including discussions which are shorter, around 150 words each, and don’t require much research,” Collins told Rest of World. “Right now, on average, I get around 30 to 40-something assignments.” He requested to be identified only by his first name to avoid jeopardizing his accounts on platforms where he finds clients.
In January 2023, online learning platform Study surveyed more than 1,000 American students and over 100 educators. More than 89% of the students said they had used ChatGPT for help with a homework assignment. Nearly half admitted to using ChatGPT for an at-home test or quiz, 53% had used it to write an essay, and 22% had used it for outlining one.
Collins now fears that the rise of AI could significantly reduce students’ reliance on freelancers like him in the long term, affecting their income. Meanwhile, he depends on ChatGPT to generate the content he used to outsource to other freelance writers.
While 17 states in the U.S. have banned contract cheating, it has not been a problem for freelancers in Kenya, concerned about providing for themselves and their families. Despite being the largest economy in East Africa, Kenya has the region’s highest unemployment rate, with 5.7% of the labor force out of work in 2021. Around 25.8% of the population is estimated to live in extreme poverty. This situation makes the country a potent hub for freelance workers. According to the Online Labour Index (OLI), an economic indicator that measures the global online gig economy, Kenya accounts for 1% of the world’s online freelance workforce, ranking 15th overall and second only to Egypt in Africa. About 70% of online freelancers in Kenya offer writing and translation services.
…
Not everyone agrees with Collins with regard to the impact that AI such as ChatGPT is having on their ghostwriting bottom line but everyone agrees there’s an impact. If you have time, do read Siele’s April 21, 2023 article in its entirety.
The dark side of using contract writing services
This May 10, 2023 essay on The Conversation by Nathalie Wierdak (Teaching Fellow) and Lynnaire Sheridan (Senior lecturer), both at the University of Otago, takes a more standard perspective, initially (Note: Links have been removed; h/t phys.org May 11, 2023 news item),
Since the launch of ChatGPT in late 2022, academics have expressed concern over the impact the artificial intelligence service could have on student work.
But educational institutions trying to safeguard academic integrity could be looking in the wrong direction. Yes, ChatGPT raises questions about how to assess students’ learning. However, it should be less of a concern than the persistent and pervasive use of ghostwriting services.
Essentially, academic ghostwriting is when a student submits a piece of work as their own which is, in fact, written by someone else. Often dubbed “contract cheating,” the outsourcing of assessment to ghostwriters undermines student learning.
…
But contract cheating is increasingly commonplace as time-poor students juggle jobs to meet the soaring costs of education. And the internet creates the perfect breeding ground for willing ghostwriting entrepreneurs.
In New Zealand, 70-80% of tertiary students engage in some form of cheating. While most of this academic misconduct was collusion with peers or plagiarism, the emergence of artificial intelligence has been described as a battle academia will inevitably lose.
It is time a new approach is taken by universities.
Allowing the use of ChatGPT by students could help reduce the use of contract cheating by doing the heavy lifting of academic work while still giving students the opportunity to learn.
…
This essay seems to have been written as a counterpoint to Siele’s article. Here’s where the May 10, 2023 essay gets interesting,
Universities have been cracking down on ghost writing to ensure quality education, to protect their students from blackmail and to even prevent international espionage [emphasis mine].
Contract cheating websites store personal data making students unwittingly vulnerable to extortion to avoid exposure and potential expulsion from their institution, or the loss of their qualification.
Some researchers are warning there is an even greater risk – that private student data will fall into the hands of foreign state actors.
Preventing student engagement with contract cheating sites, or at least detecting students who use them, avoids the likelihood of graduates in critical job roles being targeted for nationally sensitive data.
…
Given the underworld associated with ghostwriting, artificial intelligence has the potential to bust the contract cheating economy. This would keep students safer by providing them with free, instant and accessible resources.
…
If you have time to read it in its entirety, there are other advantages to AI-enhanced learning mentioned in the May 10, 2023 essay.
Launched on Thursday, July 13, 2023 during UNESCO’s (United Nations Educational, Scientific, and Cultural Organization) “Global dialogue on the ethics of neurotechnology,” the report ties together the usual measures of national scientific supremacy (number of papers published and number of patents filed) with information on corporate investment in the field. Consequently, “Unveiling the Neurotechnology Landscape: Scientific Advancements, Innovations and Major Trends” by Daniel S. Hain, Roman Jurowetzki, Mariagrazia Squicciarini, and Lihui Xu provides better insight into the international neurotechnology scene than is sometimes found in these kinds of reports. By the way, the report is open access.
Here’s what I mean, from the report‘s short summary,
…
Since 2013, government investments in this field have exceeded $6 billion. Private investment has also seen significant growth, with annual funding experiencing a 22-fold increase from 2010 to 2020, reaching $7.3 billion and totaling $33.2 billion.
This investment has translated into a 35-fold growth in neuroscience publications between 2000-2021 and 20-fold growth in innovations between 2000-2020, as proxied by patents. However, not all are poised to benefit from such developments, as big divides emerge.
Over 80% of high-impact neuroscience publications are produced by only ten countries, while 70% of countries contributed fewer than 10 such papers over the period considered. Similarly, only five countries hold 87% of IP5 neurotech patents.
This report sheds light on the neurotechnology ecosystem, that is, what is being developed, where and by whom, and informs about how neurotechnology interacts with other technological trajectories, especially Artificial Intelligence [emphasis mine]. [p. 2]
…
The money aspect is eye-opening even when you already have your suspicions. Also, it’s not entirely unexpected to learn that only ten countries produce over 80% of the high impact neurotech papers and that only five countries hold 87% of the IP5 neurotech patents but it is stunning to see it in context. (If you’re not familiar with the term ‘IP5 patents’, scroll down in this post to the relevant subhead. Hint: It means the patent was filed in one of the top five jurisdictions; I’ll leave you to guess which ones those might be.)
“Since 2013 …” isn’t quite as informative as the authors may have hoped. I wish they had given a time frame for government investments similar to what they did for corporate investments (e.g., 2010 – 2020). Also, is the $6B (likely in USD) government investment cumulative or an estimated annual number? To sum up, I would have appreciated parallel structure and specificity.
Nitpicks aside, there’s some very good material intended for policy makers. On that note, some of the analysis is beyond me. I haven’t used anything even somewhat close to their analytical tools in years and years. This commentary reflects my interests and a very rapid reading. One last thing, this is being written from a Canadian perspective. With those caveats in mind, here’s some of what I found.
A definition, social issues, country statistics, and more
There’s a definition for neurotechnology and a second mention of artificial intelligence being used in concert with neurotechnology. From the report‘s executive summary,
Neurotechnology consists of devices and procedures used to access, monitor, investigate, assess, manipulate, and/or emulate the structure and function of the neural systems of animals or human beings. It is poised to revolutionize our understanding of the brain and to unlock innovative solutions to treat a wide range of diseases and disorders.
…
Similarly to Artificial Intelligence (AI), and also due to its convergence with AI, neurotechnology may have profound societal and economic impact, beyond the medical realm. As neurotechnology directly relates to the brain, it triggers ethical considerations about fundamental aspects of human existence, including mental integrity, human dignity, personal identity, freedom of thought, autonomy, and privacy [emphases mine]. Its potential for enhancement purposes and its accessibility further amplify its prospective social and societal implications.
…
The recent discussions held at UNESCO’s Executive Board further shows Member States’ desire to address the ethics and governance of neurotechnology through the elaboration of a new standard-setting instrument on the ethics of neurotechnology, to be adopted in 2025. To this end, it is important to explore the neurotechnology landscape, delineate its boundaries, key players, and trends, and shed light on neurotech’s scientific and technological developments. [p. 7]
The present report addresses such a need for evidence in support of policy making in relation to neurotechnology by devising and implementing a novel methodology on data from scientific articles and patents:
● We detect topics over time and extract relevant keywords using a transformer-based language model fine-tuned for scientific text. Publication data for the period 2000-2021 are sourced from the Scopus database and encompass journal articles and conference proceedings in English. The 2,000 most cited publications per year are further used in in-depth content analysis.
● Keywords are identified through Named Entity Recognition and used to generate search queries for conducting a semantic search on patents’ titles and abstracts, using another language model developed for patent text. This allows us to identify patents associated with the identified neuroscience publications and their topics. The patent data used in the present analysis are sourced from the European Patent Office’s Worldwide Patent Statistical Database (PATSTAT). We consider IP5 patents filed between 2000-2020 having an English language abstract and exclude patents solely related to pharmaceuticals.
This approach allows mapping the advancements detailed in scientific literature to the technological applications contained in patent applications, allowing for an analysis of the linkages between science and technology. This almost fully automated novel approach allows repeating the analysis as neurotechnology evolves. [pp. 8-9]
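For readers curious about what that publication-to-patent matching step might look like in practice, here’s a minimal sketch. It assumes the sentence-transformers Python package and swaps in a general-purpose embedding model where the report describes models fine-tuned for scientific and patent text; the keywords and patent abstracts are invented for illustration.

```python
# Minimal sketch of matching neuroscience keywords to patent abstracts via embeddings.
# Assumes the sentence-transformers package; the report used domain-specific models for
# scientific and patent text, which are replaced here with a generic placeholder model.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder, not the report's model

# Keywords extracted (in the report, via Named Entity Recognition) from highly cited papers
keywords = ["brain-computer interface", "deep brain stimulation", "seizure prediction"]

# Patent titles/abstracts to search (illustrative stand-ins for PATSTAT records)
patent_abstracts = [
    "A wearable electrode array for decoding motor intent into cursor commands.",
    "Implantable pulse generator delivering stimulation to subthalamic targets.",
    "Method for forecasting epileptic events from intracranial EEG features.",
]

kw_emb = model.encode(keywords, convert_to_tensor=True)
pat_emb = model.encode(patent_abstracts, convert_to_tensor=True)

# For each keyword, retrieve the most semantically similar patent abstract
hits = util.semantic_search(kw_emb, pat_emb, top_k=1)
for kw, hit in zip(keywords, hits):
    best = hit[0]
    print(f"{kw!r} -> patent #{best['corpus_id']} (score {best['score']:.2f})")
```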
Findings in bullet points,
Key stylized facts are:
● The field of neuroscience has witnessed a remarkable surge in the overall number of publications since 2000, exhibiting a nearly 35-fold increase over the period considered, reaching 1.2 million in 2021. The annual number of publications in neuroscience has nearly tripled since 2000, exceeding 90,000 publications a year in 2021. This increase became even more pronounced since 2019.
● The United States leads in terms of neuroscience publication output (40%), followed by the United Kingdom (9%), Germany (7%), China (5%), Canada (4%), Japan (4%), Italy (4%), France (4%), the Netherlands (3%), and Australia (3%). These countries account for over 80% of neuroscience publications from 2000 to 2021.
● Big divides emerge, with 70% of countries in the world having less than 10 high-impact neuroscience publications between 2000 to 2021.
● Specific neurotechnology-related research trends between 2000 and 2021 include:
○ An increase in Brain-Computer Interface (BCI) research around 2010, maintaining a consistent presence ever since.
○ A significant surge in Epilepsy Detection research in 2017 and 2018, reflecting the increased use of AI and machine learning in healthcare.
○ Consistent interest in Neuroimaging Analysis, which peaks around 2004, likely because of its importance in brain activity and language comprehension studies.
○ While peaking in 2016 and 2017, Deep Brain Stimulation (DBS) remains a persistent area of research, underlining its potential in treating conditions like Parkinson’s disease and essential tremor.
● Between 2000 and 2020, the total number of patent applications in this field increased significantly, experiencing a 20-fold increase from less than 500 to over 12,000. In terms of annual figures, a consistent upward trend in neurotechnology-related patent applications emerges, with a notable doubling observed between 2015 and 2020.
● The United States account for nearly half of all worldwide patent applications (47%). Other major contributors include South Korea (11%), China (10%), Japan (7%), Germany (7%), and France (5%). These five countries together account for 87% of IP5 neurotech patents applied between 2000 and 2020.
○ The United States has historically led the field, with a peak around 2010, a decline towards 2015, and a recovery up to 2020.
○ South Korea emerged as a significant contributor after 1990, overtaking Germany in the late 2000s to become the second-largest developer of neurotechnology. By the late 2010s, South Korea’s annual neurotechnology patent applications approximated those of the United States.
○ China exhibits a sharp increase in neurotechnology patent applications in the mid-2010s, bringing it on par with the United States in terms of application numbers.
● The United States ranks highest in both scientific publications and patents, indicating their strong ability to transform knowledge into marketable inventions. China, France, and Korea excel in leveraging knowledge to develop patented innovations. Conversely, countries such as the United Kingdom, Germany, Italy, Canada, Brazil, and Australia lag behind in effectively translating neurotech knowledge into patentable innovations.
● In terms of patent quality measured by forward citations, the leading countries are Germany, US, China, Japan, and Korea.
● A breakdown of patents by technology field reveals that Computer Technology is the most important field in neurotechnology, exceeding Medical Technology, Biotechnology, and Pharmaceuticals.
The growing importance of algorithmic applications, including neural computing techniques, also emerges by looking at the increase in patent applications in these fields between 2015-2020. Compared to the reference year, computer technologies-related patents in neurotech increased by 355% and by 92% in medical technology.
● An analysis of the specialization patterns of the top-5 countries developing neurotechnologies reveals that Germany has been specializing in chemistry-related technology fields, whereas Asian countries, particularly South Korea and China, focus on computer science and electrical engineering-related fields. The United States exhibits a balanced configuration with specializations in both chemistry and computer science-related fields.
● The entities – i.e. both companies and other institutions – leading worldwide innovation in the neurotech space are: IBM (126 IP5 patents, US), Ping An Technology (105 IP5 patents, CH), Fujitsu (78 IP5 patents, JP), Microsoft (76 IP5 patents, US)1, Samsung (72 IP5 patents, KR), Sony (69 IP5 patents, JP) and Intel (64 IP5 patents, US)
This report further proposes a pioneering taxonomy of neurotechnologies based on International Patent Classification (IPC) codes.
• 67 distinct patent clusters in neurotechnology are identified, which mirror the diverse research and development landscape of the field. The 20 most prominent neurotechnology groups, particularly in areas like multimodal neuromodulation, seizure prediction, neuromorphic computing [emphasis mine], and brain-computer interfaces, point to potential strategic areas for research and commercialization.
• The variety of patent clusters identified mirrors the breadth of neurotechnology’s potential applications, from medical imaging and limb rehabilitation to sleep optimization and assistive exoskeletons.
• The development of a baseline IPC-based taxonomy for neurotechnology offers a structured framework that enriches our understanding of this technological space, and can facilitate research, development and analysis. The identified key groups mirror the interdisciplinary nature of neurotechnology and underscores the potential impact of neurotechnology, not only in healthcare but also in areas like information technology and biomaterials, with non-negligible effects over societies and economies.
1 If we consider Microsoft Technology Licensing LLM and Microsoft Corporation as being under the same umbrella, Microsoft leads worldwide developments with 127 IP5 patents. Similarly, if we were to consider that Siemens AG and Siemens Healthcare GmbH belong to the same conglomerate, Siemens would appear much higher in the ranking, in third position, with 84 IP5 patents. The distribution of intellectual property assets across companies belonging to the same conglomerate is frequent and mirrors strategic as well as operational needs and features, among others. [pp. 9-11]
Surprises and comments
Interesting and helpful to learn that “neurotechnology interacts with other technological trajectories, especially Artificial Intelligence;” this has changed and improved my understanding of neurotechnology.
It was unexpected to find Canada in the top ten countries producing neuroscience papers. However, finding out that the country lags in translating its ‘neuro’ knowledge into patentable innovation is not entirely a surprise.
It can’t be an accident that countries with major ‘electronics and computing’ companies lead in patents. These companies do have researchers but they also buy startups to acquire patents. They (and ‘patent trolls’) will also file patents preemptively. For the patent trolls, it’s a moneymaking proposition and for the large companies, it’s a way of protecting their own interests and/or (I imagine) forcing a sale.
The mention of neuromorphic (brainlike) computing in the taxonomy section was surprising and puzzling. Up to this point, I’d thought of neuromorphic computing as a kind of alternative or addition to standard computing, but the authors have blurred the lines, as per UNESCO’s definition of neurotechnology (specifically, “… emulate the structure and function of the neural systems of animals or human beings”). Again, this report is broadening my understanding of neurotechnology. Of course, it took two instances, the definition and the taxonomy, before I quite grasped it.
What’s puzzling is that neuromorphic engineering, a broader term that includes neuromorphic computing, isn’t used or mentioned. (For an explanation of the terms neuromorphic computing and neuromorphic engineering, there’s my June 23, 2023 posting, “Neuromorphic engineering: an overview.”)
The report
I won’t have time for everything. Here are some of the highlights from my admittedly personal perspective.
Neurotechnology’s applications however extend well beyond medicine [emphasis mine], and span from research, to education, to the workplace, and even people’s everyday life. Neurotechnology-based solutions may enhance learning and skill acquisition and boost focus through brain stimulation techniques. For instance, early research finds that brain-zapping caps appear to boost memory for at least one month (Berkeley, 2022). This could one day be used at home to enhance memory functions [emphasis mine]. They can further enable new ways to interact with the many digital devices we use in everyday life, transforming the way we work, live and interact. One example is the Sound Awareness wristband developed by a Stanford team (Neosensory, 2022) which enables individuals to “hear” by converting sound into tactile feedback, so that sound impaired individuals can perceive spoken words through their skin. Takagi and Nishimoto (2023) analyzed the brain scans taken through Magnetic Resonance Imaging (MRI) as individuals were shown thousands of images. They then trained a generative AI tool called Stable Diffusion on the brain scan data of the study’s participants, thus creating images that roughly corresponded to the real images shown. While this does not correspond to reading the mind of people, at least not yet, and some limitations of the study have been highlighted (Parshall, 2023), it nevertheless represents an important step towards developing the capability to interface human thoughts with computers [emphasis mine], via brain data interpretation.
While the above examples may sound somewhat like science fiction, the recent uptake of generative Artificial Intelligence applications and of large language models such as ChatGPT or Bard, demonstrates that the seemingly impossible can quickly become an everyday reality. At present, anyone can purchase online electroencephalogram (EEG) devices for a few hundred dollars [emphasis mine], to measure the electrical activity of their brain for meditation, gaming, or other purposes. [pp. 14-15]
This is a very impressive achievement. Some of the research cited was published earlier this year (2023). The extraordinary speed is a testament to the efforts of the authors and their teams. It’s also a testament to how quickly the field is moving.
I’m glad to see the mention of and focus on consumer neurotechnology. (While the authors don’t speculate, I am free to do so.) Consumer neurotechnology could be viewed as one of the steps toward normalizing a cyborg future for all of us. Yes, we have books, television programmes, movies, and video games, which all normalize the idea, but the people depicted have been severely injured and require the augmentation. With consumer neurotechnology, you have easily accessible devices being used to enhance people who aren’t injured; they just want to be ‘better’.
This phrase seemed particularly striking “… an important step towards developing the capability to interface human thoughts with computers” in light of some claims made by the Australian military in my June 13, 2023 posting “Mind-controlled robots based on graphene: an Australian research story.” (My posting has an embedded video demonstrating the Brain Robotic Interface (BRI) in action. Also, see the paragraph below the video for my ‘measured’ response.)
There’s no mention of the military in the report, which seems like a deliberate rather than an inadvertent omission, given the importance of military innovation where technology is concerned.
This section gives a good overview of government initiatives (in the report it’s followed by a table of the programmes),
Thanks to the promises it holds, neurotechnology has garnered significant attention from both governments and the private sector and is considered by many as an investment priority. According to the International Brain Initiative (IBI), brain research funding has become increasingly important over the past ten years, leading to a rise in large-scale state-led programs aimed at advancing brain intervention technologies (International Brain Initiative, 2021). Since 2013, initiatives such as the United States’ Brain Research Through Advancing Innovative Neurotechnologies (BRAIN) Initiative and the European Union’s Human Brain Project (HBP), as well as major national initiatives in China, Japan and South Korea have been launched with significant funding support from the respective governments. The Canadian Brain Research Strategy, initially operated as a multi-stakeholder coalition on brain research, is also actively seeking funding support from the government to transform itself into a national research initiative (Canadian Brain Research Strategy, 2022). A similar proposal is also seen in the case of the Australian Brain Alliance, calling for the establishment of an Australian Brain Initiative (Australian Academy of Science, n.d.). [pp. 15-16]
Privacy
There are some concerns such as these,
Beyond the medical realm, research suggests that emotional responses of consumers related to preferences and risks can be concurrently tracked by neurotechnology, such as neuroimaging and that neural data can better predict market-level outcomes than traditional behavioral data (Karmarkar and Yoon, 2016). As such, neural data is increasingly sought after in the consumer market for purposes such as digital phenotyping, neurogaming, and neuromarketing (UNESCO, 2021). This surge in demand gives rise to risks like hacking, unauthorized data reuse, extraction of privacy-sensitive information, digital surveillance, criminal exploitation of data, and other forms of abuse. These risks prompt the question of whether neural data needs distinct definition and safeguarding measures.
These issues are particularly relevant today as a wide range of electroencephalogram (EEG) headsets that can be used at home are now available in consumer markets for purposes that range from meditation assistance to controlling electronic devices through the mind. Imagine an individual is using one of these devices to play a neurofeedback game, which records the person’s brain waves during the game. Without the person being aware, the system can also identify the patterns associated with an undiagnosed mental health condition, such as anxiety. If the game company sells this data to third parties, e.g. health insurance providers, this may lead to an increase of insurance fees based on undisclosed information. This hypothetical situation would represent a clear violation of mental privacy and of unethical use of neural data.
Another example is in the field of advertising, where companies are increasingly interested in using neuroimaging to better understand consumers’ responses to their products or advertisements, a practice known as neuromarketing. For instance, a company might use neural data to determine which advertisements elicit the most positive emotional responses in consumers. While this can help companies improve their marketing strategies, it raises significant concerns about mental privacy. Questions arise in relation to consumers being aware or not that their neural data is being used, and in the extent to which this can lead to manipulative advertising practices that unfairly exploit unconscious preferences. Such potential abuses underscore the need for explicit consent and rigorous data protection measures in the use of neurotechnology for neuromarketing purposes. [pp. 21-22]
Legalities
Some countries already have laws and regulations regarding neurotechnology data,
At the national level, only a few countries have enacted laws and regulations to protect mental integrity or have included neuro-data in personal data protection laws (UNESCO, University of Milan-Bicocca (Italy) and State University of New York – Downstate Health Sciences University, 2023). Examples are the constitutional reform undertaken by Chile (Republic of Chile, 2021), the Charter for the responsible development of neurotechnologies of the Government of France (Government of France, 2022), and the Digital Rights Charter of the Government of Spain (Government of Spain, 2021). They propose different approaches to the regulation and protection of human rights in relation to neurotechnology. Countries such as the UK are also examining under which circumstances neural data may be considered as a special category of data under the general data protection framework (i.e. UK’s GDPR) (UK’s Information Commissioner’s Office, 2023) [p. 24]
As you can see, these are recent laws. There doesn’t seem to be any attempt here in Canada even though there is an act being reviewed in Parliament that could conceivably include neural data. This is from my May 1, 2023 posting,
Bill C-27 (Digital Charter Implementation Act, 2022) is what I believe is called an omnibus bill as it includes three different pieces of proposed legislation (the Consumer Privacy Protection Act [CPPA], the Artificial Intelligence and Data Act [AIDA], and the Personal Information and Data Protection Tribunal Act [PIDPTA]). [emphasis added July 11, 2023] You can read the Innovation, Science and Economic Development (ISED) Canada summary here or a detailed series of descriptions of the act here on the ISED’s Canada’s Digital Charter webpage.
My focus at the time was artificial intelligence. Now, after reading this UNESCO report and briefly looking at the ISED summary and the detailed series of descriptions of the act on ISED’s Canada’s Digital Charter webpage, I don’t see anything that specifies neural data, but it’s not excluded either.
IP5 patents
Here’s the explanation (the footnote is included at the end of the excerpt),
IP5 patents represent a subset of overall patents filed worldwide, which have the characteristic of having been filed in at least one top intellectual property offices (IPO) worldwide (the so called IP5, namely the Chinese National Intellectual Property Administration, CNIPA (formerly SIPO); the European Patent Office, EPO; the Japan Patent Office, JPO; the Korean Intellectual Property Office, KIPO; and the United States Patent and Trademark Office, USPTO) as well as another country, which may or may not be an IP5. This signals their potential applicability worldwide, as their inventiveness and industrial viability have been validated by at least two leading IPOs. This gives these patents a sort of “quality” check, also since patenting inventions is costly and if applicants try to protect the same invention in several parts of the world, this normally mirrors that the applicant has expectations about their importance and expected value. If we were to conduct the same analysis using information about individually considered patent applied worldwide, i.e. without filtering for quality nor considering patent families, we would risk conducting a biased analysis based on duplicated data. Also, as patentability standards vary across countries and IPOs, and what matters for patentability is the existence (or not) of prior art in the IPO considered, we would risk mixing real innovations with patents related to catching up phenomena in countries that are not at the forefront of the technology considered.
9 The five IP offices (IP5) is a forum of the five largest intellectual property offices in the world that was set up to improve the efficiency of the examination process for patents worldwide. The IP5 Offices together handle about 80% of the world’s patent applications, and 95% of all work carried out under the Patent Cooperation Treaty (PCT), see http://www.fiveipoffices.org. (Dernis et al., 2015) [p. 31]
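To make that filtering rule concrete, here is a minimal sketch (my own illustration, not code from the report) of how one might flag IP5 patent families: keep a family only if it was filed at one of the five big offices and at least one other office.

```python
# Illustrative only: a toy filter for "IP5 patents" as described in the excerpt above.
# A patent family is kept if it was filed at one or more of the five big offices
# (CNIPA, EPO, JPO, KIPO, USPTO) *and* at least one additional office.

IP5_OFFICES = {"CNIPA", "EPO", "JPO", "KIPO", "USPTO"}

def is_ip5_family(filing_offices):
    """filing_offices: iterable of office codes where the same invention was filed."""
    offices = set(filing_offices)
    filed_at_ip5 = bool(offices & IP5_OFFICES)
    filed_elsewhere = len(offices) >= 2  # at least one more office, IP5 or not
    return filed_at_ip5 and filed_elsewhere

# Hypothetical patent families (the office codes are examples, not real data)
families = {
    "family_A": ["USPTO", "EPO"],   # kept: two IP5 offices
    "family_B": ["USPTO", "CA"],    # kept: one IP5 office plus another country
    "family_C": ["USPTO"],          # dropped: a single filing, no "quality" signal
    "family_D": ["CA", "AU"],       # dropped: no IP5 filing
}

ip5_families = {name: offices for name, offices in families.items() if is_ip5_family(offices)}
print(ip5_families)  # family_A and family_B survive the filter
```

As the excerpt explains, the point of the rule is to use the cost and effort of multi-office filing as a rough quality check and to avoid double-counting the same invention.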
AI assistance on this report
As noted earlier, I have next to no experience with the analytical tools, having not attempted this kind of work in several years. Here’s an example of what they were doing,
We utilize a combination of text embeddings based on Bidirectional Encoder Representations from Transformer (BERT), dimensionality reduction, and hierarchical clustering inspired by the BERTopic methodology to identify latent themes within research literature. Latent themes or topics in the context of topic modeling represent clusters of words that frequently appear together within a collection of documents (Blei, 2012). These groupings are not explicitly labeled but are inferred through computational analysis examining patterns in word usage. These themes are ‘hidden’ within the text, only to be revealed through this analysis. …
…
We further utilize OpenAI’s GPT-4 model to enrich our understanding of topics’ keywords and to generate topic labels (OpenAI, 2023), thus supplementing expert review of the broad interdisciplinary corpus. Recently, GPT-4 has shown impressive results in medical contexts across various evaluations (Nori et al., 2023), making it a useful tool to enhance the information obtained from prior analysis stages, and to complement them. The automated process enhances the evaluation workflow, effectively emphasizing neuroscience themes pertinent to potential neurotechnology patents. Notwithstanding existing concerns about hallucinations (Lee, Bubeck and Petro, 2023) and errors in generative AI models, this methodology employs the GPT-4 model for summarization and interpretation tasks, which significantly mitigates the likelihood of hallucinations. Since the model is constrained to the context provided by the keyword collections, it limits the potential for fabricating information outside of the specified boundaries, thereby enhancing the accuracy and reliability of the output. [pp. 33-34]
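For anyone who, like me, hasn’t worked with these tools, here is a rough sketch of what a BERTopic-style pipeline can look like in code. It is my own minimal reconstruction of the steps described above (embeddings, dimensionality reduction, clustering, then keyword lists that could be handed to GPT-4 for labelling), not the report’s actual code, and the model and parameter choices are assumptions.

```python
# A minimal BERTopic-style sketch (my reconstruction, not the report's code).
# Steps: embed texts with a BERT-family model -> reduce dimensionality ->
# cluster -> pull keywords per cluster (the kind of keyword sets the authors
# say they passed to GPT-4 to generate topic labels).
import numpy as np
import umap
import hdbscan
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import CountVectorizer

def topic_sketch(abstracts, min_cluster_size=20, top_n=10):
    # 1) Text embeddings (the model name here is an assumption)
    embeddings = SentenceTransformer("all-MiniLM-L6-v2").encode(abstracts)

    # 2) Dimensionality reduction before clustering
    reduced = umap.UMAP(n_neighbors=15, n_components=5,
                        metric="cosine").fit_transform(embeddings)

    # 3) Hierarchical, density-based clustering; label -1 means "noise"
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(reduced)

    # 4) Crude per-cluster keywords (BERTopic proper uses class-based TF-IDF)
    vectorizer = CountVectorizer(stop_words="english")
    counts = vectorizer.fit_transform(abstracts)
    terms = np.array(vectorizer.get_feature_names_out())
    topics = {}
    for topic in sorted(set(labels) - {-1}):
        rows = np.where(labels == topic)[0]
        topic_counts = counts[rows].sum(axis=0).A1
        topics[topic] = terms[topic_counts.argsort()[::-1][:top_n]].tolist()
    return labels, topics
```

Given a list of paper abstracts, the function returns a cluster label for each document plus the top keywords for each cluster; labelling those keyword sets is where the GPT-4 step described in the excerpt would come in.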
I couldn’t resist adding the ChatGPT paragraph given all of the recent hoopla about it.
Multimodal neuromodulation and neuromorphic computing patents
I think this gives a pretty good indication of the activity on the patent front,
The largest, coherent topic, termed “multimodal neuromodulation,” comprises 535 patents detailing methodologies for deep or superficial brain stimulation designed to address neurological and psychiatric ailments. These patented technologies interact with various points in neural circuits to induce either Long-Term Potentiation (LTP) or Long-Term Depression (LTD), offering treatment for conditions such as obsession, compulsion, anxiety, depression, Parkinson’s disease, and other movement disorders. The modalities encompass implanted deep-brain stimulators (DBS), Transcranial Magnetic Stimulation (TMS), and transcranial Direct Current Stimulation (tDCS). Among the most representative documents for this cluster are patents with titles: Electrical stimulation of structures within the brain or Systems and methods for enhancing or optimizing neural stimulation therapy for treating symptoms of Parkinson’s disease and or other movement disorders. [p.65]
Given my longstanding interest in memristors, which (I believe) have to a large extent helped to stimulate research into neuromorphic computing, this had to be included. Then, there was the brain-computer interfaces cluster,
A cluster identified as “Neuromorphic Computing” consists of 366 patents primarily focused on devices designed to mimic human neural networks for efficient and adaptable computation. The principal elements of these inventions are resistive memory cells and artificial synapses. They exhibit properties similar to the neurons and synapses in biological brains, thus granting these devices the ability to learn and modulate responses based on rewards, akin to the adaptive cognitive capabilities of the human brain.
The primary technology classes associated with these patents fall under specific IPC codes, representing the fields of neural network models, analog computers, and static storage structures. Essentially, these classifications correspond to technologies that are key to the construction of computers and exhibit cognitive functions similar to human brain processes.
Examples for this cluster include neuromorphic processing devices that leverage variations in resistance to store and process information, artificial synapses exhibiting spike-timing dependent plasticity, and systems that allow event-driven learning and reward modulation within neuromorphic computers.
In relation to neurotechnology as a whole, the “neuromorphic computing” cluster holds significant importance. It embodies the fusion of neuroscience and technology, thereby laying the basis for the development of adaptive and cognitive computational systems. Understanding this specific cluster provides a valuable insight into the progressing domain of neurotechnology, promising potential advancements across diverse fields, including artificial intelligence and healthcare.
The “Brain-Computer Interfaces” cluster, consisting of 146 patents, embodies a key aspect of neurotechnology that focuses on improving the interface between the brain and external devices. The technology classification codes associated with these patents primarily refer to methods or devices for treatment or protection of eyes and ears, devices for introducing media into, or onto, the body, and electric communication techniques, which are foundational elements of brain-computer interface (BCI) technologies.
Key patents within this cluster include a brain-computer interface apparatus adaptable to use environment and method of operating thereof, a double closed circuit brain-machine interface system, and an apparatus and method of brain-computer interface for device controlling based on brain signal. These inventions mainly revolve around the concept of using brain signals to control external devices, such as robotic arms, and improving the classification performance of these interfaces, even after long periods of non-use.
The inventions described in these patents improve the accuracy of device control, maintain performance over time, and accommodate multiple commands, thus significantly enhancing the functionality of BCIs.
Other identified technologies include systems for medical image analysis, limb rehabilitation, tinnitus treatment, sleep optimization, assistive exoskeletons, and advanced imaging techniques, among others. [pp. 66-67]
Having sections on neuromorphic computing and brain-computer interface patents in immediate proximity led to more speculation on my part. Imagine how much easier it would be to initiate a BCI connection if it’s powered with a neuromorphic (brainlike) computer/device. [ETA July 21, 2023: Following on from that thought, it might be more than just easier to initiate a BCI connection. Could a brainlike computer become part of your brain? Why not? It’s been successfully argued that a robotic wheelchair was part of someone’s body; see my January 30, 2013 posting and scroll down about 40% of the way.]
Neurotechnology is a complex and rapidly evolving technological paradigm whose trajectories have the power to shape people’s identity, autonomy, privacy, sentiments, behaviors and overall well-being, i.e. the very essence of what it means to be human.
Designing and implementing careful and effective norms and regulations ensuring that neurotechnology is developed and deployed in an ethical manner, for the good of individuals and for society as a whole, call for a careful identification and characterization of the issues at stake. This entails shedding light on the whole neurotechnology ecosystem, that is what is being developed, where and by whom, and also understanding how neurotechnology interacts with other developments and technological trajectories, especially AI. Failing to do so may result in ineffective (at best) or distorted policies and policy decisions, which may harm human rights and human dignity.
…
Addressing the need for evidence in support of policy making, the present report offers first time robust data and analysis shedding light on the neurotechnology landscape worldwide. To this end, it proposes and implements an innovative approach that leverages artificial intelligence and deep learning on data from scientific publications and paten[t]s to identify scientific and technological developments in the neurotech space. The methodology proposed represents a scientific advance in itself, as it constitutes a quasi-automated replicable strategy for the detection and documentation of neurotechnology-related breakthroughs in science and innovation, to be repeated over time to account for the evolution of the sector. Leveraging this approach, the report further proposes an IPC-based taxonomy for neurotechnology which allows for a structured framework to the exploration of neurotechnology, to enable future research, development and analysis. The innovative methodology proposed is very flexible and can in fact be leveraged to investigate different emerging technologies, as they arise.
…
In terms of technological trajectories, we uncover a shift in the neurotechnology industry, with greater emphasis being put on computer and medical technologies in recent years, compared to traditionally dominant trajectories related to biotechnology and pharmaceuticals. This shift warrants close attention from policymakers, and calls for attention in relation to the latest (converging) developments in the field, especially AI and related methods and applications and neurotechnology.
This is all the more important and the observed growth and specialization patterns are unfolding in the context of regulatory environments that, generally, are either not existent or not fit for purpose. Given the sheer implications and impact of neurotechnology on the very essence of human beings, this lack of regulation poses key challenges related to the possible infringement of mental integrity, human dignity, personal identity, privacy, freedom of thought, and autonomy, among others. Furthermore, issues surrounding accessibility and the potential for neurotech enhancement applications triggers significant concerns, with far-reaching implications for individuals and societies. [pp. 72-73]
Last words about the report
Informative, readable, and thought-provoking. And, it helped broaden my understanding of neurotechnology.
Future endeavours?
I’m hopeful that one of these days one of these groups (UNESCO, Canadian Science Policy Centre, or ???) will tackle the issue of business bankruptcy in the neurotechnology sector. It has already occurred, as noted in my “Going blind when your neural implant company flirts with bankruptcy [long read]” April 5, 2022 posting. That story opens with a woman going blind in a New York subway when her neural implant fails. That’s how she found out that the company supplying her implant was going out of business.
In my July 7, 2023 posting about the UNESCO July 2023 dialogue on neurotechnology, I’ve included information on Neuralink (one of Elon Musk’s companies) and its approval (despite some investigations) by the US Food and Drug Administration to start human clinical trials. Scroll down about 75% of the way to the “Food for thought” subhead where you will find stories about allegations made against Neuralink.
The end
If you want to know more about the field, the report offers a seven-page bibliography, and there’s a lot of material here; you can start with this December 3, 2019 posting, “Neural and technological inequalities,” which features an article mentioning a discussion between two scientists. Surprisingly (to me), the source article is in Fast Company (a leading progressive business media brand, according to their tagline).
I have two categories you may want to check: Human Enhancement and Neuromorphic Engineering. There are also a number of tags: neuromorphic computing, machine/flesh, brainlike computing, cyborgs, neural implants, neuroprosthetics, memristors, and more.
Should you have any observations or corrections, please feel free to leave them in the Comments section of this posting.
For the record, it’s one study, and the details in the news release about how it was constructed and how its results were analyzed are scant (more about that later in this post). Nonetheless, an April 28, 2023 news item on ScienceDaily offers an intriguing ChatGPT possibility,
There has been widespread speculation about how advances in artificial intelligence (AI) assistants like ChatGPT could be used in medicine.
A new study published in JAMA Internal Medicine [JAMA is Journal of the American Medical Association] led by Dr. John W. Ayers from the Qualcomm Institute within the University of California San Diego provides an early glimpse into the role that AI assistants could play in medicine. The study compared written responses from physicians and those from ChatGPT to real-world health questions. A panel of licensed healthcare professionals preferred ChatGPT’s responses 79% of the time and rated ChatGPT’s responses as higher quality and more empathetic.
“The opportunities for improving healthcare with AI are massive,” said Ayers, who is also vice chief of innovation in the UC San Diego School of Medicine Division of Infectious Disease and Global Public Health. “AI-augmented care is the future of medicine.”
Is ChatGPT Ready for Healthcare?
In the new study, the research team set out to answer the question: Can ChatGPT respond accurately to questions patients send to their doctors? If yes, AI models could be integrated into health systems to improve physician responses to questions sent by patients and ease the ever-increasing burden on physicians.
“ChatGPT might be able to pass a medical licensing exam,” said study co-author Dr. Davey Smith, a physician-scientist, co-director of the UC San Diego Altman Clinical and Translational Research Institute and professor at the UC San Diego School of Medicine, “but directly answering patient questions accurately and empathetically is a different ballgame.”
“The COVID-19 pandemic accelerated virtual healthcare adoption,” added study co-author Dr. Eric Leas, a Qualcomm Institute affiliate and assistant professor in the UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science. “While this made accessing care easier for patients, physicians are burdened by a barrage of electronic patient messages seeking medical advice that have contributed to record-breaking levels of physician burnout.”
Designing a Study to Test ChatGPT in a Healthcare Setting
To obtain a large and diverse sample of healthcare questions and physician answers that did not contain identifiable personal information, the team turned to social media where millions of patients publicly post medical questions to which doctors respond: Reddit’s AskDocs.
r/AskDocs is a subreddit with approximately 452,000 members who post medical questions and verified healthcare professionals submit answers. While anyone can respond to a question, moderators verify healthcare professionals’ credentials and responses display the respondent’s level of credentials. The result is a large and diverse set of patient medical questions and accompanying answers from licensed medical professionals.
While some may wonder if question-answer exchanges on social media are a fair test, team members noted that the exchanges were reflective of their clinical experience.
The team randomly sampled 195 exchanges from AskDocs where a verified physician responded to a public question. The team provided the original question to ChatGPT and asked it to author a response. A panel of three licensed healthcare professionals assessed each question and the corresponding responses and were blinded to whether the response originated from a physician or ChatGPT. They compared responses based on information quality and empathy, noting which one they preferred.
The panel of healthcare professional evaluators preferred ChatGPT responses to physician responses 79% of the time.
“ChatGPT messages responded with nuanced and accurate information that often addressed more aspects of the patient’s questions than physician responses,” said Jessica Kelley, a nurse practitioner with San Diego firm Human Longevity and study co-author.
Additionally, ChatGPT responses were rated significantly higher in quality than physician responses: good or very good quality responses were 3.6 times higher for ChatGPT than physicians (physicians 22.1% versus ChatGPT 78.5%). The responses were also more empathic: empathetic or very empathetic responses were 9.8 times higher for ChatGPT than for physicians (physicians 4.6% versus ChatGPT 45.1%).
“I never imagined saying this,” added Dr. Aaron Goodman, an associate clinical professor at UC San Diego School of Medicine and study coauthor, “but ChatGPT is a prescription I’d like to give to my inbox. The tool will transform the way I support my patients.”
Harnessing AI Assistants for Patient Messages
“While our study pitted ChatGPT against physicians, the ultimate solution isn’t throwing your doctor out altogether,” said Dr. Adam Poliak, an assistant professor of Computer Science at Bryn Mawr College and study co-author. “Instead, a physician harnessing ChatGPT is the answer for better and empathetic care.”
“Our study is among the first to show how AI assistants can potentially solve real world healthcare delivery problems,” said Dr. Christopher Longhurst, Chief Medical Officer and Chief Digital Officer at UC San Diego Health. “These results suggest that tools like ChatGPT can efficiently draft high quality, personalized medical advice for review by clinicians, and we are beginning that process at UCSD Health.”
Dr. Mike Hogarth, a physician-bioinformatician, co-director of the Altman Clinical and Translational Research Institute at UC San Diego, professor in the UC San Diego School of Medicine and study co-author, added, “It is important that integrating AI assistants into healthcare messaging be done in the context of a randomized controlled trial to judge how the use of AI assistants impact outcomes for both physicians and patients.”
In addition to improving workflow, investments into AI assistant messaging could impact patient health and physician performance.
Dr. Mark Dredze, the John C Malone Associate Professor of Computer Science at Johns Hopkins and study co-author, noted: “We could use these technologies to train doctors in patient-centered communication, eliminate health disparities suffered by minority populations who often seek healthcare via messaging, build new medical safety systems, and assist doctors by delivering higher quality and more efficient care.”
In addition to Ayers, Poliak, Dredze, Leas, Kelley, Goodman, Longhurst, Hogarth and Smith, authors of the JAMA Internal Medicine paper, “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum” (JAMA Intern Med. doi:10.1001/jamainternmed.2023.1838), are Zechariah Zhu of UC San Diego and Dr. Dennis J. Faix of the Naval Health Research Center.
…
COI Statement Disclosures as reported in the paper: Dr Ayers reported owning equity in companies focused on data analytics, Good Analytics, of which he was CEO until June 2018, and Health Watcher. Dr Dredze reported personal fees from Bloomberg LP and Sickweather outside the submitted work and owning an equity position in Good Analytics. Dr Leas reported personal fees from Good Analytics during the conduct of the study. Dr Goodman reported personal fees from Seattle Genetics outside the submitted work. Dr Hogarth reported being an advisor for LifeLink, a health care chatbot company. Dr Longhurst reported being an advisor and equity holder at Doximity. Dr Smith reported stock options from Linear Therapies, personal fees from Arena Pharmaceuticals, Model Medicines, Pharma Holdings, Bayer Pharmaceuticals, Evidera, Signant Health, Fluxergy, Lucira, and Kiadis outside the submitted work. No other disclosures were reported.
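Before getting to my questions, one thing that can be checked from the news release itself is the arithmetic behind the “3.6 times” and “9.8 times” figures; they follow directly from the percentages quoted above (a minimal check, using only the numbers reported in the release),

```python
# Quick check of the multipliers quoted in the news release
physician_good, chatgpt_good = 22.1, 78.5   # % of responses rated good or very good
physician_emp, chatgpt_emp = 4.6, 45.1      # % rated empathetic or very empathetic

print(round(chatgpt_good / physician_good, 1))  # 3.6 -> "3.6 times higher"
print(round(chatgpt_emp / physician_emp, 1))    # 9.8 -> "9.8 times higher"
```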
It’s very easy to take research at face value. I do it too, so I’m forcing myself to read this news release with a critical eye. Here are a few questions I have,
How did this ChatGPT learn to be empathetic? And, how did the researchers measure empathy?
What did they mean by ‘quality’ of response? How did they measure it?
The respondents’ age range would have been interesting and useful to know as age can affect the types of questions being asked.
How did they randomize the physicians taking part in the study? I.e., does a certain kind (or age) of physician go to the website AskDocs to answer questions?
Sadly, I can’t get behind the paywall to see if that information is available in the study, but this has been a good reminder (to me and, I hope, to you, too) to keep asking questions.
This week, I’m publishing my first stories (see also my June 13, 2023 posting, “ChatGPT and a neuromorphic [brainlike] synapse”) in which artificial intelligence (AI) software is combined with a memristor (a hardware component) for brainlike (neuromorphic) computing.
Everyone is talking about the newest AI and the power of neural networks, forgetting that software is limited by the hardware on which it runs. But it is hardware, says USC [University of Southern California] Professor of Electrical and Computer Engineering Joshua Yang, that has become “the bottleneck.” Now, Yang’s new research with collaborators might change that. They believe that they have developed a new type of chip with the best memory of any chip thus far for edge AI (AI in portable devices).
For approximately the past 30 years, while the size of the neural networks needed for AI and data science applications doubled every 3.5 months, the hardware capability needed to process them doubled only every 3.5 years. According to Yang, hardware presents a more and more severe problem for which few have patience.
Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices. Yang’s work falls into the middle—focusing on exploiting and combining the advantages of the new materials and traditional silicon technology that could support heavy AI and data science computation.
Their new paper in Nature focuses on the understanding of fundamental physics that leads to a drastic increase in memory capacity needed for AI hardware. The team led by Yang, with researchers from USC (including Han Wang’s group), MIT [Massachusetts Institute of Technology], and the University of Massachusetts, developed a protocol for devices to reduce “noise” and demonstrated the practicality of using this protocol in integrated chips. This demonstration was made at TetraMem, a startup company co-founded by Yang and his co-authors (Miao Hu, Qiangfei Xia, and Glenn Ge), to commercialize AI acceleration technology. According to Yang, this new memory chip has the highest information density per device (11 bits) among all types of known memory technologies thus far. Such small but powerful devices could play a critical role in bringing incredible power to the devices in our pockets. The chips are not just for memory but also for the processor. And millions of them in a small chip, working in parallel to rapidly run your AI tasks, could only require a small battery to power it.
The chips that Yang and his colleagues are creating combine silicon with metal oxide memristors in order to create powerful but low-energy intensive chips. The technique focuses on using the positions of atoms to represent information rather than the number of electrons (which is the current technique involved in computations on chips). The positions of the atoms offer a compact and stable way to store more information in an analog, instead of digital fashion. Moreover, the information can be processed where it is stored instead of being sent to one of the few dedicated ‘processors,’ eliminating the so-called ‘von Neumann bottleneck’ existing in current computing systems. In this way, says Yang, computing for AI is “more energy efficient with a higher throughput.”
How it works:
Yang explains that electrons which are manipulated in traditional chips, are “light.” And this lightness, makes them prone to moving around and being more volatile. Instead of storing memory through electrons, Yang and collaborators are storing memory in full atoms. Here is why this memory matters. Normally, says Yang, when one turns off a computer, the information memory is gone—but if you need that memory to run a new computation and your computer needs the information all over again, you have lost both time and energy. This new method, focusing on activating atoms rather than electrons, does not require battery power to maintain stored information. Similar scenarios happen in AI computations, where a stable memory capable of high information density is crucial. Yang imagines this new tech that may enable powerful AI capability in edge devices, such as Google Glasses, which he says previously suffered from a frequent recharging issue.
Further, by converting chips to rely on atoms as opposed to electrons, chips become smaller. Yang adds that with this new method, there is more computing capacity at a smaller scale. And this method, he says, could offer “many more levels of memory to help increase information density.”
To put it in context, right now, ChatGPT is running on a cloud. The new innovation, followed by some further development, could put the power of a mini version of ChatGPT in everyone’s personal device. It could make such high-powered tech more affordable and accessible for all sorts of applications.
Here’s a link to and a citation for the paper,
Thousands of conductance levels in memristors integrated on CMOS by Mingyi Rao, Hao Tang, Jiangbin Wu, Wenhao Song, Max Zhang, Wenbo Yin, Ye Zhuo, Fatemeh Kiani, Benjamin Chen, Xiangqi Jiang, Hefei Liu, Hung-Yu Chen, Rivu Midya, Fan Ye, Hao Jiang, Zhongrui Wang, Mingche Wu, Miao Hu, Han Wang, Qiangfei Xia, Ning Ge, Ju Li & J. Joshua Yang. Nature volume 615, pages 823–829 (2023) DOI: https://doi.org/10.1038/s41586-023-05759-5 Issue Date: 30 March 2023 Published: 29 March 2023
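As an aside, the “11 bits” figure and the “thousands of conductance levels” in the paper’s title line up: 2^11 = 2,048 programmable levels per device. Here is a toy sketch (my own illustration, not TetraMem’s design) of the analog in-memory idea, where weights stored as quantized conductances multiply input voltages in place instead of shuttling data to a separate processor,

```python
import numpy as np

BITS_PER_DEVICE = 11
LEVELS = 2 ** BITS_PER_DEVICE      # 2048 distinguishable conductance levels
print(LEVELS)                      # 2048

# Toy analog matrix-vector multiply on a memristor crossbar:
# weights are stored as device conductances (quantized to LEVELS states),
# inputs are applied as voltages, and the column currents give the products
# "in place" (Ohm's law plus Kirchhoff's current law), avoiding the von
# Neumann shuttle between memory and processor.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 8))             # ideal weights
G = np.round((W + 1) / 2 * (LEVELS - 1))        # map weights to integer device states
W_analog = G / (LEVELS - 1) * 2 - 1             # what the crossbar actually stores

V = rng.uniform(-1, 1, size=8)                  # input voltages
I_exact = W @ V                                 # digital reference result
I_crossbar = W_analog @ V                       # analog, quantized result

print(np.max(np.abs(I_exact - I_crossbar)))     # tiny error at 11 bits per device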
I was teaching an introductory course about nanotechnology back in 2014 and, at the end of a session, stated (more or less) that the full potential for artificial intelligence (software) wasn’t going to be realized until the hardware (memristors) was part of the package. (It’s interesting to revisit that in light of the recent uproar around AI, covered in my May 25, 2023 posting, which offered a survey of the situation.)
One of the major problems with artificial intelligence is its memory. The other is energy consumption. Both problems could be addressed by the integration of memristors into the hardware, giving rise to neuromorphic (brainlike) computing. (For those who don’t know, the human brain, in addition to its capacity for memory, is remarkably energy efficient.)
This is the first time I’ve seen research into memristors where software has been included. Disclaimer: There may be a lot more research of this type; I just haven’t seen it before. A March 24, 2023 news item on ScienceDaily announces research from Korea,
ChatGPT’s impact extends beyond the education sector and is causing significant changes in other areas. The AI language model is recognized for its ability to perform various tasks, including paper writing, translation, coding, and more, all through question-and-answer-based interactions. The AI system relies on deep learning, which requires extensive training to minimize errors, resulting in frequent data transfers between memory and processors. However, traditional digital computer systems’ von Neumann architecture separates the storage and computation of information, resulting in increased power consumption and significant delays in AI computations. Researchers have developed semiconductor technologies suitable for AI applications to address this challenge.
A research team at POSTECH, led by Professor Yoonyoung Chung (Department of Electrical Engineering, Department of Semiconductor Engineering), Professor Seyoung Kim (Department of Materials Science and Engineering, Department of Semiconductor Engineering), and Ph.D. candidate Seongmin Park (Department of Electrical Engineering), has developed a high-performance AI semiconductor device [emphasis mine] using indium gallium zinc oxide (IGZO), an oxide semiconductor widely used in OLED [organic light-emitting diode] displays. The new device has proven to be excellent in terms of performance and power efficiency.
Efficient AI operations, such as those of ChatGPT, require computations to occur within the memory responsible for storing information. Unfortunately, previous AI semiconductor technologies were limited in meeting all the requirements, such as linear and symmetric programming and uniformity, to improve AI accuracy.
The research team sought IGZO as a key material for AI computations that could be mass-produced and provide uniformity, durability, and computing accuracy. This compound comprises four atoms in a fixed ratio of indium, gallium, zinc, and oxygen and has excellent electron mobility and leakage current properties, which have made it a backplane of the OLED display.
Using this material, the researchers developed a novel synapse device [emphasis mine] composed of two transistors interconnected through a storage node. The precise control of this node’s charging and discharging speed has enabled the AI semiconductor to meet the diverse performance metrics required for high-level performance. Furthermore, applying synaptic devices to a large-scale AI system requires the output current of synaptic devices to be minimized. The researchers confirmed the possibility of utilizing the ultra-thin film insulators inside the transistors to control the current, making them suitable for large-scale AI.
The researchers used the newly developed synaptic device to train and classify handwritten data, achieving a high accuracy of over 98%, [emphasis mine] which verifies its potential application in high-accuracy AI systems in the future.
Professor Chung explained, “The significance of my research team’s achievement is that we overcame the limitations of conventional AI semiconductor technologies that focused solely on material development. To do this, we utilized materials already in mass production. Furthermore, linear and symmetrical programming characteristics were obtained through a new structure using two transistors as one synaptic device. Thus, our successful development and application of this new AI semiconductor technology show great potential to improve the efficiency and accuracy of AI.”
This study was published last week [March 2023] on the inside back cover of Advanced Electronic Materials [paper edition] and was supported by the Next-Generation Intelligent Semiconductor Technology Development Program through the National Research Foundation, funded by the Ministry of Science and ICT [Information and Communication Technologies] of Korea.
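The “linear and symmetrical programming” requirement mentioned in the excerpt is easier to see with a toy model (mine, not POSTECH’s device physics): with linear updates, a train of potentiation pulses followed by the same number of depression pulses brings a synapse back to where it started; a saturating, nonlinear device drifts, and that drift degrades on-chip training accuracy. The names and the nonlinearity parameter below are my own assumptions,

```python
# Toy model of analog synapse programming (illustrative only; not the IGZO device).
# w is a normalized conductance in [0, 1]; alpha sets the nonlinearity
# (alpha = 0.0 gives perfectly linear, symmetric updates).
def potentiate(w, step=0.05, alpha=0.0):
    return min(1.0, w + step * (1 - alpha * w))          # update shrinks as w saturates

def depress(w, step=0.05, alpha=0.0):
    return max(0.0, w - step * (1 - alpha * (1 - w)))    # mirrored effect near the floor

def up_then_down(w, n, alpha):
    for _ in range(n):
        w = potentiate(w, alpha=alpha)
    for _ in range(n):
        w = depress(w, alpha=alpha)
    return w

w0 = 0.5
print(round(up_then_down(w0, 5, alpha=0.0), 3))  # 0.5    -> linear device returns home
print(round(up_then_down(w0, 5, alpha=0.8), 3))  # ~0.474 -> nonlinear device drifts
```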
There is also another approach besides using materials such as indium gallium zinc oxide (IGZO) for a memristor: using biological cells, as suggested by my June 6, 2023 posting, which features work on biological neural networks (BNNs) in relation to creating robots that can perform brainlike computing.
It’s fascinating to see all the current excitement (distressed and/or enthusiastic) around the act of writing and artificial intelligence. It’s easy to forget that it’s not new. First came the ‘non-human authors’ and then the panic(s). *What follows the ‘nonhuman authors’ section is essentially a survey of the situation/panic(s).*
How to handle non-human authors (ChatGPT and other AI agents)—the medical edition
The folks at the Journal of the American Medical Association (JAMA) have recently adopted a pragmatic approach to the possibility of nonhuman authors of scientific and medical papers, from a January 31, 2023 JAMA editorial,
Artificial intelligence (AI) technologies to help authors improve the preparation and quality of their manuscripts and published articles are rapidly increasing in number and sophistication. These include tools to assist with writing, grammar, language, references, statistical analysis, and reporting standards. Editors and publishers also use AI-assisted tools for myriad purposes, including to screen submissions for problems (eg, plagiarism, image manipulation, ethical issues), triage submissions, validate references, edit, and code content for publication in different media and to facilitate postpublication search and discoverability.1
In November 2022, OpenAI released a new open source, natural language processing tool called ChatGPT.2,3 ChatGPT is an evolution of a chatbot that is designed to simulate human conversation in response to prompts or questions (GPT stands for “generative pretrained transformer”). The release has prompted immediate excitement about its many potential uses4 but also trepidation about potential misuse, such as concerns about using the language model to cheat on homework assignments, write student essays, and take examinations, including medical licensing examinations.5 In January 2023, Nature reported on 2 preprints and 2 articles published in the science and health fields that included ChatGPT as a bylined author.6 Each of these includes an affiliation for ChatGPT, and 1 of the articles includes an email address for the nonhuman “author.” According to Nature, that article’s inclusion of ChatGPT in the author byline was an “error that will soon be corrected.”6 However, these articles and their nonhuman “authors” have already been indexed in PubMed and Google Scholar.
Nature has since defined a policy to guide the use of large-scale language models in scientific publication, which prohibits naming of such tools as a “credited author on a research paper” because “attribution of authorship carries with it accountability for the work, and AI tools cannot take such responsibility.”7 The policy also advises researchers who use these tools to document this use in the Methods or Acknowledgment sections of manuscripts.7 Other journals8,9 and organizations10 are swiftly developing policies that ban inclusion of these nonhuman technologies as “authors” and that range from prohibiting the inclusion of AI-generated text in submitted work8 to requiring full transparency, responsibility, and accountability for how such tools are used and reported in scholarly publication.9,10 The International Conference on Machine Learning, which issues calls for papers to be reviewed and discussed at its conferences, has also announced a new policy: “Papers that include text generated from a large-scale language model (LLM) such as ChatGPT are prohibited unless the produced text is presented as a part of the paper’s experimental analysis.”11 The society notes that this policy has generated a flurry of questions and that it plans “to investigate and discuss the impact, both positive and negative, of LLMs on reviewing and publishing in the field of machine learning and AI” and will revisit the policy in the future.11
…
This is a link to and a citation for the JAMA editorial,
Dr. Andrew Maynard (scientist, author, and professor of Advanced Technology Transitions in the Arizona State University [ASU] School for the Future of Innovation in Society and founder of the ASU Future of Being Human initiative and Director of the ASU Risk Innovation Nexus) also takes a pragmatic approach in a March 14, 2023 posting on his eponymous blog,
Like many of my colleagues, I’ve been grappling with how ChatGPT and other Large Language Models (LLMs) are impacting teaching and education — especially at the undergraduate level.
…
We’re already seeing signs of the challenges here as a growing divide emerges between LLM-savvy students who are experimenting with novel ways of using (and abusing) tools like ChatGPT, and educators who are desperately trying to catch up. As a result, educators are increasingly finding themselves unprepared and poorly equipped to navigate near-real-time innovations in how students are using these tools. And this is only exacerbated where their knowledge of what is emerging is several steps behind that of their students.
To help address this immediate need, a number of colleagues and I compiled a practical set of Frequently Asked Questions on ChatGPT in the classroom. These cover the basics of what ChatGPT is, possible concerns over use by students, potential creative ways of using the tool to enhance learning, and suggestions for class-specific guidelines.
Crawford Kilian, a longtime educator, author, and contributing editor to The Tyee, expresses measured enthusiasm for the new technology (as does Dr. Maynard), in a December 13, 2022 article for thetyee.ca, Note: Links have been removed,
…
ChatGPT, its makers tell us, is still in beta form. Like a million other new users, I’ve been teaching it (tuition-free) so its answers will improve. It’s pretty easy to run a tutorial: once you’ve created an account, you’re invited to ask a question or give a command. Then you watch the reply, popping up on the screen at the speed of a fast and very accurate typist.
…
Early responses to ChatGPT have been largely Luddite: critics have warned that its arrival means the end of high school English, the demise of the college essay and so on. But remember that the Luddites were highly skilled weavers who commanded high prices for their products; they could see that newfangled mechanized looms would produce cheap fabrics that would push good weavers out of the market. ChatGPT, with sufficient tweaks, could do just that to educators and other knowledge workers.
Having spent 40 years trying to teach my students how to write, I have mixed feelings about this prospect. But it wouldn’t be the first time that a technological advancement has resulted in the atrophy of a human mental skill.
Writing arguably reduced our ability to memorize — and to speak with memorable and persuasive coherence. …
…
Writing and other technological “advances” have made us what we are today — powerful, but also powerfully dangerous to ourselves and our world. If we can just think through the implications of ChatGPT, we may create companions and mentors that are not so much demonic as the angels of our better nature.
More than writing: emergent behaviour
The ChatGPT story extends further than writing and chatting. From a March 6, 2023 article by Stephen Ornes for Quanta Magazine, Note: Links have been removed,
What movie do these emojis describe?
That prompt was one of 204 tasks chosen last year to test the ability of various large language models (LLMs) — the computational engines behind AI chatbots such as ChatGPT. The simplest LLMs produced surreal responses. “The movie is a movie about a man who is a man who is a man,” one began. Medium-complexity models came closer, guessing The Emoji Movie. But the most complex model nailed it in one guess: Finding Nemo.
“Despite trying to expect surprises, I’m surprised at the things these models can do,” said Ethan Dyer, a computer scientist at Google Research who helped organize the test. It’s surprising because these models supposedly have one directive: to accept a string of text as input and predict what comes next, over and over, based purely on statistics. Computer scientists anticipated that scaling up would boost performance on known tasks, but they didn’t expect the models to suddenly handle so many new, unpredictable ones.
…
“That language models can do these sort of things was never discussed in any literature that I’m aware of,” said Rishi Bommasani, a computer scientist at Stanford University. Last year, he helped compile a list of dozens of emergent behaviors [emphasis mine], including several identified in Dyer’s project. That list continues to grow.
Now, researchers are racing not only to identify additional emergent abilities but also to figure out why and how they occur at all — in essence, to try to predict unpredictability. Understanding emergence could reveal answers to deep questions around AI and machine learning in general, like whether complex models are truly doing something new or just getting really good at statistics. It could also help researchers harness potential benefits and curtail emergent risks.
…
Biologists, physicists, ecologists and other scientists use the term “emergent” to describe self-organizing, collective behaviors that appear when a large collection of things acts as one. Combinations of lifeless atoms give rise to living cells; water molecules create waves; murmurations of starlings swoop through the sky in changing but identifiable patterns; cells make muscles move and hearts beat. Critically, emergent abilities show up in systems that involve lots of individual parts. But researchers have only recently been able to document these abilities in LLMs as those models have grown to enormous sizes.
…
But the debut of LLMs also brought something truly unexpected. Lots of somethings. With the advent of models like GPT-3, which has 175 billion parameters — or Google’s PaLM, which can be scaled up to 540 billion — users began describing more and more emergent behaviors. One DeepMind engineer even reported being able to convince ChatGPT that it was a Linux terminal and getting it to run some simple mathematical code to compute the first 10 prime numbers. Remarkably, it could finish the task faster than the same code running on a real Linux machine.
As with the movie emoji task, researchers had no reason to think that a language model built to predict text would convincingly imitate a computer terminal. Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli [Deep Ganguli, a computer scientist at the AI startup Anthropic] said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”
…
There is an obvious problem with asking these models to explain themselves: They are notorious liars. [emphasis mine] “We’re increasingly relying on these models to do basic work,” Ganguli said, “but I do not just trust these. I check their work.” As one of many amusing examples, in February [2023] Google introduced its AI chatbot, Bard. The blog post announcing the new tool shows Bard making a factual error.
Perhaps not entirely unrelated to current developments, there was this announcement in a May 1, 2023 article by Hannah Alberga for CTV (Canadian Television Network) news, Note: Links have been removed,
Toronto’s pioneer of artificial intelligence quits Google to openly discuss dangers of AI
…
Geoffrey Hinton, professor at the University of Toronto and the “godfather” of deep learning – a field of artificial intelligence that mimics the human brain – announced his departure from the company on Monday [May 1, 2023] citing the desire to freely discuss the implications of deep learning and artificial intelligence, and the possible consequences if it were utilized by “bad actors.”
Hinton, a British-Canadian computer scientist, is best-known for a series of deep neural network breakthroughs that won him, Yann LeCun and Yoshua Bengio the 2018 Turing Award, known as the Nobel Prize of computing.
…
Hinton has been invested in the now-hot topic of artificial intelligence since its early stages. In 1970, he got a Bachelor of Arts in experimental psychology from Cambridge, followed by his Ph.D. in artificial intelligence in Edinburgh, U.K. in 1978.
He joined Google after spearheading a major breakthrough with two of his graduate students at the University of Toronto in 2012, in which the team uncovered and built a new method of artificial intelligence: neural networks. The team’s first neural network was incorporated and sold to Google for $44 million.
Neural networks are a method of deep learning that effectively teaches computers how to learn the way humans do by analyzing data, paving the way for machines to classify objects and understand speech recognition.
…
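Since “neural network” tends to get used as a magic phrase in this kind of coverage, here is a toy sketch of the basic idea: a single artificial “neuron” that adjusts its weights by analyzing labelled data until it can classify points on its own. This is my own drastically simplified illustration in Python and has nothing to do with Hinton’s actual research,

import math
import random

# Toy illustration only: one "neuron" (a logistic unit) learning from data.
random.seed(0)

# Toy dataset: a point (x1, x2) gets label 1 when x1 + x2 > 1.0, else 0.
points = [(random.random(), random.random()) for _ in range(200)]
labels = [1.0 if x1 + x2 > 1.0 else 0.0 for (x1, x2) in points]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One "neuron": two weights and a bias, squashed by a sigmoid.
w1, w2, b = 0.0, 0.0, 0.0
learning_rate = 0.5

# Train with simple gradient descent on the squared error.
for epoch in range(1000):
    for (x1, x2), y in zip(points, labels):
        prediction = sigmoid(w1 * x1 + w2 * x2 + b)
        gradient = (prediction - y) * prediction * (1.0 - prediction)
        w1 -= learning_rate * gradient * x1
        w2 -= learning_rate * gradient * x2
        b -= learning_rate * gradient

# Check how well the learned weights classify the training points.
correct = sum(
    1 for (x1, x2), y in zip(points, labels)
    if (sigmoid(w1 * x1 + w2 * x2 + b) > 0.5) == (y == 1.0)
)
print(correct, "of", len(points), "points classified correctly")

Real deep learning systems stack thousands (or billions) of such units in layers, but the core loop — make a prediction, measure the error, nudge the weights — is the same.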
There’s a bit more from Hinton in a May 3, 2023 article by Sheena Goodyear for the Canadian Broadcasting Corporation’s (CBC) radio programme, As It Happens (the 10 minute radio interview is embedded in the article), Note: A link has been removed,
There was a time when Geoffrey Hinton thought artificial intelligence would never surpass human intelligence — at least not within our lifetimes.
Nowadays, he’s not so sure.
“I think that it’s conceivable that this kind of advanced intelligence could just take over from us,” the renowned British-Canadian computer scientist told As It Happens host Nil Köksal. “It would mean the end of people.”
…
For the last decade, he [Geoffrey Hinton] divided his career between teaching at the University of Toronto and working for Google’s deep-learning artificial intelligence team. But this week, he announced his resignation from Google in an interview with the New York Times.
Now Hinton is speaking out about what he fears are the greatest dangers posed by his life’s work, including governments using AI to manipulate elections or create “robot soldiers.”
But other experts in the field of AI caution against his visions of a hypothetical dystopian future, saying they generate unnecessary fear, distract from the very real and immediate problems currently posed by AI, and allow bad actors to shirk responsibility when they wield AI for nefarious purposes.
…
Ivana Bartoletti, founder of the Women Leading in AI Network, says dwelling on dystopian visions of an AI-led future can do us more harm than good.
“It’s important that people understand that, to an extent, we are at a crossroads,” said Bartoletti, chief privacy officer at the IT firm Wipro.
“My concern about these warnings, however, is that we focus on the sort of apocalyptic scenario, and that takes us away from the risks that we face here and now, and opportunities to get it right here and now.”
…
Ziv Epstein, a PhD candidate at the Massachusetts Institute of Technology who studies the impacts of technology on society, says the problems posed by AI are very real, and he’s glad Hinton is “raising the alarm bells about this thing.”
“That being said, I do think that some of these ideas that … AI supercomputers are going to ‘wake up’ and take over, I personally believe that these stories are speculative at best and kind of represent sci-fi fantasy that can monger fear” and distract from more pressing issues, he said.
He especially cautions against language that anthropomorphizes — or, in other words, humanizes — AI.
…
“It’s absolutely possible I’m wrong. We’re in a period of huge uncertainty where we really don’t know what’s going to happen,” he [Hinton] said.
Don Pittis in his May 4, 2023 business analysis for CBC news online offers a somewhat jaundiced view of Hinton’s concern regarding AI, Note: Links have been removed,
As if we needed one more thing to terrify us, the latest warning from a University of Toronto scientist considered by many to be the founding intellect of artificial intelligence, adds a new layer of dread.
Others who have warned in the past that thinking machines are a threat to human existence seem a little miffed with the rock-star-like media coverage Geoffrey Hinton, billed at a conference this week as the Godfather of AI, is getting for what seems like a last minute conversion. Others say Hinton’s authoritative voice makes a difference.
…
Not only did Hinton tell an audience of experts at Wednesday’s [May 3, 2023] EmTech Digital conference that humans will soon be supplanted by AI — “I think it’s serious and fairly close.” — he said that due to national and business competition, there is no obvious way to prevent it.
“What we want is some way of making sure that even if they’re smarter than us, they’re going to do things that are beneficial,” said Hinton on Wednesday [May 3, 2023] as he explained his change of heart in detailed technical terms.
“But we need to try and do that in a world where there’s bad actors who want to build robot soldiers that kill people and it seems very hard to me.”
“I wish I had a nice and simple solution I could push, but I don’t,” he said. “It’s not clear there is a solution.”
So when is all this happening?
“In a few years time they may be significantly more intelligent than people,” he told Nil Köksal on CBC Radio’s As It Happens on Wednesday [May 3, 2023].
While he may be late to the party, Hinton’s voice adds new clout to growing anxiety that artificial general intelligence, or AGI, has now joined climate change and nuclear Armageddon as ways for humans to extinguish themselves.
But long before that final day, he worries that the new technology will soon begin to strip away jobs and lead to a destabilizing societal gap between rich and poor that current politics will be unable to solve.
The EmTech Digital conference is a who’s who of AI business and academia, fields which often overlap. Most other participants at the event were not there to warn about AI like Hinton, but to celebrate the explosive growth of AI research and business.
…
As one expert I spoke to pointed out, the growth in AI is exponential and has been for a long time. But even knowing that, the increase in the dollar value of AI to business caught the sector by surprise.
Eight years ago when I wrote about the expected increase in AI business, I quoted the market intelligence group Tractica that AI spending would “be worth more than $40 billion in the coming decade,” which sounded like a lot at the time. It appears that was an underestimate.
“The global artificial intelligence market size was valued at $428 billion U.S. in 2022,” said an updated report from Fortune Business Insights. “The market is projected to grow from $515.31 billion U.S. in 2023.” The estimate for 2030 is more than $2 trillion.
…
This week the new Toronto AI company Cohere, where Hinton has a stake of his own, announced it was “in advanced talks” to raise $250 million. The Canadian media company Thomson Reuters said it was planning “a deeper investment in artificial intelligence.” IBM is expected to “pause hiring for roles that could be replaced with AI.” The founders of Google DeepMind and LinkedIn have launched a ChatGPT competitor called Pi.
And that was just this week.
…
“My one hope is that, because if we allow it to take over it will be bad for all of us, we could get the U.S. and China to agree, like we did with nuclear weapons,” said Hinton. “We’re all in the same boat with respect to existential threats, so we all ought to be able to co-operate on trying to stop it.”
Interviewer and moderator Will Douglas Heaven, an editor at MIT Technology Review finished Hinton’s sentence for him: “As long as we can make some money on the way.”
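To put the market projections Pittis cites in perspective, here is some back-of-the-envelope arithmetic of my own (the growth rate below is derived from the quoted figures, not a number taken from the Fortune Business Insights report),

# Rough arithmetic, my own: what compound annual growth rate would take
# the quoted $515.31 billion (2023) to roughly $2 trillion by 2030?

start_value = 515.31   # USD billions, 2023 (figure quoted above)
end_value = 2000.0     # USD billions, "more than $2 trillion" by 2030
years = 2030 - 2023

cagr = (end_value / start_value) ** (1 / years) - 1
print("Implied compound annual growth rate: {:.1%}".format(cagr))  # roughly 21% per year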
There was more pointed criticism in a May 5, 2023 article by Chan for Fast Company,

Geoffrey Hinton, the 75-year-old computer scientist known as the “Godfather of AI,” made headlines this week after resigning from Google to sound the alarm about the technology he helped create. In a series of high-profile interviews, the machine learning pioneer has speculated that AI will surpass humans in intelligence and could even learn to manipulate or kill people on its own accord.
But women who for years have been speaking out about AI’s problems—even at the expense of their jobs—say Hinton’s alarmism isn’t just opportunistic but also overshadows specific warnings about AI’s actual impacts on marginalized people.
“It’s disappointing to see this autumn-years redemption tour [emphasis mine] from someone who didn’t really show up” for other Google dissenters, says Meredith Whittaker, president of the Signal Foundation and an AI researcher who says she was pushed out of Google in 2019 in part over her activism against the company’s contract to build machine vision technology for U.S. military drones. (Google has maintained that Whittaker chose to resign.)
…
Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.
“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”
Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. …
…
… An open letter in support of Gebru was signed by nearly 2,700 Googlers in 2020, but Hinton wasn’t one of them.
Instead, Hinton has used the spotlight to downplay Gebru’s voice. In an appearance on CNN Tuesday [May 2, 2023], for example, he dismissed a question from Jake Tapper about whether he should have stood up for Gebru, saying her ideas “aren’t as existentially serious as the idea of these things getting more intelligent than us and taking over.” [emphasis mine]
…
Gebru has been mentioned here a few times. She’s mentioned in passing in a June 23, 2022 posting, “Racist and sexist robots have flawed AI,” and in a little more detail in an August 30, 2022 posting, “Should AI algorithms get patents for their inventions and is anyone talking about copyright for texts written by AI algorithms?”; scroll down to the ‘Consciousness and ethical AI’ subhead.
Chan has another Fast Company article investigating AI issues, also published on May 5, 2023: “Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them.”
The last two existential AI panics
The term “autumn-years redemption tour” is striking and, while the reference to age could be viewed as problematic, it also hints at the money, honours, and acknowledgement that Hinton has enjoyed as an eminent scientist. I’ve covered two previous panics set off by eminent scientists. “Existential risk” is the title of my November 26, 2012 posting, which highlights Martin Rees’ efforts to found the Centre for the Study of Existential Risk at the University of Cambridge.
Rees is a big deal. From his Wikipedia entry, Note: Links have been removed,
Martin John Rees, Baron Rees of Ludlow OM FRS FREng FMedSci FRAS HonFInstP[10][2] (born 23 June 1942) is a British cosmologist and astrophysicist.[11] He is the fifteenth Astronomer Royal, appointed in 1995,[12][13][14] and was Master of Trinity College, Cambridge, from 2004 to 2012 and President of the Royal Society between 2005 and 2010.[15][16][17][18][19][20]
The next panic was set off by Stephen Hawking (1942 – 2018; also at the University of Cambridge, Wikipedia entry) a few years before he died. (Note: Rees, Hinton, and Hawking were all born within five years of each other and all have/had ties to the University of Cambridge. Interesting coincidence, eh?) From a January 9, 2015 article by Emily Chung for CBC news online,
Machines turning on their creators has been a popular theme in books and movies for decades, but very serious people are starting to take the subject very seriously. Physicist Stephen Hawking says, “the development of full artificial intelligence could spell the end of the human race.” Tesla Motors and SpaceX founder Elon Musk suggests that AI is probably “our biggest existential threat.”
Artificial intelligence experts say there are good reasons to pay attention to the fears expressed by big minds like Hawking and Musk — and to do something about it while there is still time.
Hawking made his most recent comments at the beginning of December [2014], in response to a question about an upgrade to the technology he uses to communicate. He relies on the device because he has amyotrophic lateral sclerosis, a degenerative disease that affects his ability to move and speak.
…
Popular works of science fiction – from the latest Terminator trailer, to the Matrix trilogy, to Star Trek’s borg – envision that beyond that irreversible historic event, machines will destroy, enslave or assimilate us, says Canadian science fiction writer Robert J. Sawyer.
Sawyer has written about a different vision of life beyond singularity [when machines surpass humans in general intelligence] — one in which machines and humans work together for their mutual benefit. But he also sits on a couple of committees at the Lifeboat Foundation, a non-profit group that looks at future threats to the existence of humanity, including those posed by the “possible misuse of powerful technologies” such as AI. He said Hawking and Musk have good reason to be concerned.
To sum up, the first panic was in 2012, the next in 2014/15, and the latest one began earlier this year (2023) with a letter. A March 29, 2023 Thomson Reuters news item on CBC news online provides information on the letter’s contents,
Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4, in an open letter citing potential risks to society and humanity.
Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarizing lengthy documents.
The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.
…
Co-signatories included Stability AI CEO Emad Mostaque, researchers at Alphabet-owned DeepMind, and AI heavyweights Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer of research in the field.
According to the European Union’s transparency register, the Future of Life Institute is primarily funded by the Musk Foundation, as well as London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.
The concerns come as EU police force Europol on Monday [March 27, 2023] joined a chorus of ethical and legal concerns over advanced AI like ChatGPT, warning about the potential misuse of the system in phishing attempts, disinformation and cybercrime.
Meanwhile, the U.K. government unveiled proposals for an “adaptable” regulatory framework around AI.
The government’s approach, outlined in a policy paper published on Wednesday [March 29, 2023], would split responsibility for governing artificial intelligence (AI) between its regulators for human rights, health and safety, and competition, rather than create a new body dedicated to the technology.
…
The engineers have chimed in, from an April 7, 2023 article by Margo Anderson for the IEEE (Institute of Electrical and Electronics Engineers) Spectrum magazine, Note: Links have been removed,
The open letter [published March 29, 2023], titled “Pause Giant AI Experiments,” was organized by the nonprofit Future of Life Institute and signed by more than 27,565 people (as of 8 May). It calls for cessation of research on “all AI systems more powerful than GPT-4.”
It’s the latest of a host of recent “AI pause” proposals including a suggestion by Google’s François Chollet of a six-month “moratorium on people overreacting to LLMs” in either direction.
In the news media, the open letter has inspired straight reportage, critical accounts for not going far enough (“shut it all down,” Eliezer Yudkowsky wrote in Time magazine), as well as critical accounts for being both a mess and an alarmist distraction that overlooks the real AI challenges ahead.
IEEE members have expressed a similar diversity of opinions.
…
There was an earlier open letter in January 2015 according to Wikipedia’s “Open Letter on Artificial Intelligence” entry, Note: Links have been removed,
In January 2015, Stephen Hawking, Elon Musk, and dozens of artificial intelligence experts[1] signed an open letter on artificial intelligence calling for research on the societal impacts of AI. The letter affirmed that society can reap great potential benefits from artificial intelligence, but called for concrete research on how to prevent certain potential “pitfalls”: artificial intelligence has the potential to eradicate disease and poverty, but researchers must not create something which is unsafe or uncontrollable.[1] The four-paragraph letter, titled “Research Priorities for Robust and Beneficial Artificial Intelligence: An Open Letter”, lays out detailed research priorities in an accompanying twelve-page document.
…
As for ‘Mr. ChatGPT,’ or Sam Altman, CEO of OpenAI: while he didn’t sign the March 29, 2023 letter, he appeared before the US Congress suggesting AI needs to be regulated, according to a May 16, 2023 news article by Mohar Chatterjee for Politico.
You’ll notice I’ve arbitrarily designated three AI panics by assigning their origins to eminent scientists. In reality, these concerns rise and fall in ways that don’t allow for such a tidy analysis. As Chung notes, science fiction regularly addresses this issue. For example, there’s my October 16, 2013 posting, “Wizards & Robots: a comic book encourages study in the sciences and maths and discussions about existential risk.” By the way, will.i.am (of the Black Eyed Peas band) was involved in the comic book project and he is a longtime supporter of STEM (science, technology, engineering, and mathematics) initiatives.
Finally (but not quite)
Puzzling, isn’t it? I’m not sure we’re asking the right questions but it’s encouraging to see that at least some are being asked.
Dr. Andrew Maynard in a May 12, 2023 essay for The Conversation (h/t May 12, 2023 item on phys.org) notes that ‘Luddites’ questioned technology’s inevitable progress and were vilified for doing so, Note: Links have been removed,
The term “Luddite” emerged in early 1800s England. At the time there was a thriving textile industry that depended on manual knitting frames and a skilled workforce to create cloth and garments out of cotton and wool. But as the Industrial Revolution gathered momentum, steam-powered mills threatened the livelihood of thousands of artisanal textile workers.
Faced with an industrialized future that threatened their jobs and their professional identity, a growing number of textile workers turned to direct action. Galvanized by their leader, Ned Ludd, they began to smash the machines that they saw as robbing them of their source of income.
It’s not clear whether Ned Ludd was a real person, or simply a figment of folklore invented during a period of upheaval. But his name became synonymous with rejecting disruptive new technologies – an association that lasts to this day.
Questioning doesn’t mean rejecting
Contrary to popular belief, the original Luddites were not anti-technology, nor were they technologically incompetent. Rather, they were skilled adopters and users of the artisanal textile technologies of the time. Their argument was not with technology, per se, but with the ways that wealthy industrialists were robbing them of their way of life.
…
In December 2015, Stephen Hawking, Elon Musk and Bill Gates were jointly nominated for a “Luddite Award.” Their sin? Raising concerns over the potential dangers of artificial intelligence.
The irony of three prominent scientists and entrepreneurs being labeled as Luddites underlines the disconnect between the term’s original meaning and its more modern use as an epithet for anyone who doesn’t wholeheartedly and unquestioningly embrace technological progress.
Yet technologists like Musk and Gates aren’t rejecting technology or innovation. Instead, they’re rejecting a worldview that all technological advances are ultimately good for society. This worldview optimistically assumes that the faster humans innovate, the better the future will be.
…
In an age of ChatGPT, gene editing and other transformative technologies, perhaps we all need to channel the spirit of Ned Ludd as we grapple with how to ensure that future technologies do more good than harm.
In fact, “Neo-Luddites” or “New Luddites” is a term that emerged at the end of the 20th century.
In 1990, the psychologist Chellis Glendinning published an essay titled “Notes toward a Neo-Luddite Manifesto.”
…
Then there are the Neo-Luddites who actively reject modern technologies, fearing that they are damaging to society. New York City’s Luddite Club falls into this camp. Formed by a group of tech-disillusioned Gen-Zers, the club advocates the use of flip phones, crafting, hanging out in parks and reading hardcover or paperback books. Screens are an anathema to the group, which sees them as a drain on mental health.
I’m not sure how many of today’s Neo-Luddites – whether they’re thoughtful technologists, technology-rejecting teens or simply people who are uneasy about technological disruption – have read Glendinning’s manifesto. And to be sure, parts of it are rather contentious. Yet there is a common thread here: the idea that technology can lead to personal and societal harm if it is not developed responsibly.
…
Getting back to where this started with nonhuman authors, Amelia Eqbal has written up an informal transcript of a March 16, 2023 CBC radio interview (the radio segment is embedded) about GPT-4 (the latest AI chatbot technology from OpenAI) between host Elamin Abdelmahmoud and tech journalist Alyssa Bereznak.
I was hoping to add a little more Canadian content, so in March 2023 and again in April 2023, I sent a question about whether there were any policies regarding nonhuman or AI authors to Kim Barnhardt at the Canadian Medical Association Journal (CMAJ). To date, there has been no reply but should one arrive, I will place it here.
In the meantime, I have this from Canadian writer, Susan Baxter in her May 15, 2023 blog posting “Coming soon: Robot Overlords, Sentient AI and more,”
…
The current threat looming (Covid having been declared null and void by the WHO*) is Artificial Intelligence (AI) which, we are told, is becoming too smart for its own good and will soon outsmart humans. Then again, given some of the humans I’ve met along the way that wouldn’t be difficult.
…
All this talk of scary-boo AI seems to me to have become the worst kind of cliché, one that obscures how our lives have become more complicated and more frustrating as apps and bots and cyber-whatsits take over.
The trouble with clichés, as Alain de Botton wrote in How Proust Can Change Your Life, is not that they are wrong or contain false ideas but more that they are “superficial articulations of good ones”. Cliches are oversimplifications that become so commonplace we stop noticing the more serious subtext. (This is rife in medicine where metaphors such as talk of “replacing” organs through transplants makes people believe it’s akin to changing the oil filter in your car. Or whatever it is EV’s have these days that needs replacing.)
…
Should you live in Vancouver (Canada) and be attending a May 28, 2023 AI event, you may want to read Susan Baxter’s piece as a counterbalance to “Discover the future of artificial intelligence at this unique AI event in Vancouver,” a May 19, 2023 piece of sponsored content by Katy Brennan for the Daily Hive,
…
If you’re intrigued and eager to delve into the rapidly growing field of AI, you’re not going to want to miss this unique Vancouver event.
On Sunday, May 28 [2023], a Multiplatform AI event is coming to the Vancouver Playhouse — and it’s set to take you on a journey into the future of artificial intelligence.
The exciting conference promises a fusion of creativity, tech innovation, and thought-provoking insights, with talks from renowned AI leaders and concept artists, who will share their experiences and opinions.
Guests can look forward to intense discussions about AI’s pros and cons, hear real-world case studies, and learn about the ethical dimensions of AI, its potential threats to humanity, and the laws that govern its use.
Live Q&A sessions will also be held, where leading experts in the field will address all kinds of burning questions from attendees. There will also be a dynamic round table and several other opportunities to connect with industry leaders, pioneers, and like-minded enthusiasts.
…
This conference is being held at The Playhouse, 600 Hamilton Street, from 11 am to 7:30 pm. Ticket prices range from $299 to $349 to $499 (depending on when you make your purchase). From the Multiplatform AI Conference homepage,
Event Speakers
Max Sills, General Counsel at Midjourney
From Jan 2022 – present (Advisor – now General Counsel) – Midjourney – An independent research lab exploring new mediums of thought and expanding the imaginative powers of the human species (SF) Midjourney – a generative artificial intelligence program and service created and hosted by a San Francisco-based independent research lab Midjourney, Inc. Midjourney generates images from natural language descriptions, called “prompts”, similar to OpenAI’s DALL-E and Stable Diffusion. For now the company uses Discord Server as a source of service and, with huge 15M+ members, is the biggest Discord server in the world. In the two-things-at-once department, Max Sills also known as an owner of Open Advisory Services, firm which is set up to help small and medium tech companies with their legal needs (managing outside counsel, employment, carta, TOS, privacy). Their clients are enterprise level, medium companies and up, and they are here to help anyone on open source and IP strategy. Max is an ex-counsel at Block, ex-general manager of the Crypto Open Patent Alliance. Prior to that Max led Google’s open source legal group for 7 years.
…
So, the first speaker listed is a lawyer associated with Midjourney, a highly controversial generative artificial intelligence programme used to generate images. According to their entry on Wikipedia, the company is being sued, Note: Links have been removed,
…
On January 13, 2023, three artists – Sarah Andersen, Kelly McKernan, and Karla Ortiz – filed a copyright infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists, by training AI tools on five billion images scraped from the web, without the consent of the original artists.[32]
…
My October 24, 2022 posting highlights some of the issues with generative image programmes and Midjourney is mentioned throughout.
As I noted earlier, I’m glad to see more thought being put into the societal impact of AI and somewhat disconcerted by the hyperbole from the likes of Geoffrey Hinton and of Vancouver’s Multiplatform AI conference organizers. Mike Masnick put it nicely in his May 24, 2023 posting on TechDirt (Note 1: I’ve taken a paragraph out of context; his larger issue is about proposals for legislation; Note 2: Links have been removed),
…
Honestly, this is partly why I’ve been pretty skeptical about the “AI Doomers” who keep telling fanciful stories about how AI is going to kill us all… unless we give more power to a few elite people who seem to think that it’s somehow possible to stop AI tech from advancing. As I noted last month, it is good that some in the AI space are at least conceptually grappling with the impact of what they’re building, but they seem to be doing so in superficial ways, focusing only on the sci-fi dystopian futures they envision, and not things that are legitimately happening today from screwed up algorithms.
…
For anyone interested in the Canadian government attempts to legislate AI, there’s my May 1, 2023 posting, “Canada, AI regulation, and the second reading of the Digital Charter Implementation Act, 2022 (Bill C-27).”
Addendum (June 1, 2023)
Another statement warning about runaway AI was issued on Tuesday, May 30, 2023. This one was far briefer than the previous March 2023 warning. From the Center for AI Safety’s “Statement on AI Risk” webpage,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war [followed by a list of signatories] …
Vanessa Romo’s May 30, 2023 article (with contributions from Bobby Allyn) for NPR ([US] National Public Radio) offers an overview of both warnings. Rae Hodge’s May 31, 2023 article for Salon offers a more critical view, Note: Links have been removed,
The artificial intelligence world faced a swarm of stinging backlash Tuesday morning, after more than 350 tech executives and researchers released a public statement declaring that the risks of runaway AI could be on par with those of “nuclear war” and human “extinction.” Among the signatories were some who are actively pursuing the profitable development of the very products their statement warned about — including OpenAI CEO Sam Altman and Google DeepMind CEO Demis Hassabis.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement from the non-profit Center for AI Safety said.
But not everyone was shaking in their boots, especially not those who have been charting AI tech moguls’ escalating use of splashy language — and those moguls’ hopes for an elite global AI governance board.
TechCrunch’s Natasha Lomas, whose coverage has been steeped in AI, immediately unravelled the latest panic-push efforts with a detailed rundown of the current table stakes for companies positioning themselves at the front of the fast-emerging AI industry.
“Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band and chatter together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent to get together to discuss harms their tools can be seen causing right now,” Lomas wrote.
“Instead of the statement calling for a development pause, which would risk freezing OpenAI’s lead in the generative AI field, it lobbies policymakers to focus on risk mitigation — doing so while OpenAI is simultaneously crowdfunding efforts to shape ‘democratic processes for steering AI,'” Lomas added.
…
The use of scary language and fear as a marketing tool has a long history in tech. And, as the LA Times’ Brian Merchant pointed out in an April column, OpenAI stands to profit significantly from a fear-driven gold rush of enterprise contracts.
“[OpenAI is] almost certainly betting its longer-term future on more partnerships like the one with Microsoft and enterprise deals serving large companies,” Merchant wrote. “That means convincing more corporations that if they want to survive the coming AI-led mass upheaval, they’d better climb aboard.”
…
Fear, after all, is a powerful sales tool.
Romo’s May 30, 2023 article for NPR offers a good overview and, if you have the time, I recommend reading Hodge’s May 31, 2023 article for Salon in its entirety.
*ETA June 8, 2023: This sentence “What follows the ‘nonhuman authors’ is essentially a survey of situation/panic.” was added to the introductory paragraph at the beginning of this post.