
Coming soon: Responsible AI at the 35th Canadian Conference on Artificial Intelligence (AI) from 30 May to 3 June, 2022

35 years? How have I not stumbled on this conference before? Anyway, I’m glad to have the news (even if I’m late to the party), from the 35th Canadian Conference on Artificial Intelligence homepage,

The 35th Canadian Conference on Artificial Intelligence will take place virtually in Toronto, Ontario, from 30 May to 3 June, 2022. All presentations and posters will be online, with in-person social events to be scheduled in Toronto for those who are able to attend in-person. Viewing rooms and isolated presentation facilities will be available for all visitors to the University of Toronto during the event.

The event is collocated with the Computer and Robot Vision conferences. These events (AI·CRV 2022) will bring together hundreds of leaders in research, industry, and government, as well as Canada’s most accomplished students. They showcase Canada’s ingenuity, innovation and leadership in intelligent systems and advanced information and communications technology. A single registration lets you attend any session in the two conferences, which are scheduled in parallel tracks.

The conference proceedings are published on PubPub, an open-source, privacy-respecting, and open access online platform. They are submitted to be indexed and abstracted in leading indexing services such as DBLP, ACM, Google Scholar.

You can view last year’s [2021] proceedings here: https://caiac.pubpub.org/ai2021.

The 2021 proceedings appear to be open access.

I can’t tell if ‘Responsible AI’ was included as a specific topic at previous conferences, but the 2022 edition is definitely hosting several sessions on that theme. From the Responsible AI activities webpage,

Keynote speaker: Julia Stoyanovich

New York University

“Building Data Equity Systems”

Equity as a social concept — treating people differently depending on their endowments and needs to provide equality of outcome rather than equality of treatment — lends a unifying vision for ongoing work to operationalize ethical considerations across technology, law, and society.  In my talk I will present a vision for designing, developing, deploying, and overseeing data-intensive systems that consider equity as an essential objective.  I will discuss ongoing technical work, and will place this work into the broader context of policy, education, and public outreach.

Biography: Julia Stoyanovich is an Institute Associate Professor of Computer Science & Engineering at the Tandon School of Engineering, Associate Professor of Data Science at the Center for Data Science, and Director of the Center for Responsible AI at New York University (NYU).  Her research focuses on responsible data management and analysis: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data science lifecycle.  She established the “Data, Responsibly” consortium and served on the New York City Automated Decision Systems Task Force, by appointment from Mayor de Blasio.  Julia developed and has been teaching courses on Responsible Data Science at NYU, and is a co-creator of an award-winning comic book series on this topic.  In addition to data ethics, Julia works on the management and analysis of preference and voting data, and on querying large evolving graphs. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University, and a B.S. in Computer Science and in Mathematics & Statistics from the University of Massachusetts at Amherst.  She is a recipient of an NSF CAREER award and a Senior Member of the ACM.

Panel on ethical implications of AI

Panelists

Luke Stark, Faculty of Information and Media Studies, Western University

Luke Stark is an Assistant Professor in the Faculty of Information and Media Studies at Western University in London, ON. His work interrogating the historical, social, and ethical impacts of computing and AI technologies has appeared in journals including The Information Society, Social Studies of Science, and New Media & Society, and in popular venues like Slate, The Globe and Mail, and The Boston Globe. Luke was previously a Postdoctoral Researcher in AI ethics at Microsoft Research, and a Postdoctoral Fellow in Sociology at Dartmouth College; he holds a PhD from the Department of Media, Culture, and Communication at New York University, and a BA and MA from the University of Toronto.

Nidhi Hegde, Associate Professor in Computer Science and Amii [Alberta Machine Intelligence Institute] Fellow at the University of Alberta

Nidhi is a Fellow and Canada CIFAR [Canadian Institute for Advanced Research] AI Chair at Amii and an Associate Professor in the Department of Computing Science at the University of Alberta. Before joining UAlberta, she spent many years in industry research labs. Most recently, she was a research team lead at Borealis AI (a research institute at the Royal Bank of Canada), where her team worked on privacy-preserving methods for machine learning models and other applied problems for RBC. Prior to that, she spent many years in research labs in Europe working on a variety of interesting and impactful problems. She was a researcher at Nokia Bell Labs in France from January 2015 to March 2018, where she led a new team focused on Maths and Algorithms for Machine Learning in Networks and Systems within the Maths and Algorithms group. She also spent a few years at the Technicolor Paris Research Lab working on social network analysis, smart grids, privacy, and recommendations. Nidhi is an associate editor of the IEEE/ACM Transactions on Networking and an editor of the Elsevier Performance Evaluation journal.

Karina Vold, Assistant Professor, Institute for the History and Philosophy of Science and Technology, University of Toronto

Dr. Karina Vold is an Assistant Professor at the Institute for the History and Philosophy of Science and Technology at the University of Toronto. She is also a Faculty Affiliate at the U of T Schwartz Reisman Institute for Technology and Society, a Faculty Associate at the U of T Centre for Ethics, and an Associate Fellow at the University of Cambridge’s Leverhulme Centre for the Future of Intelligence. Vold specialises in Philosophy of Cognitive Science and Philosophy of Artificial Intelligence, and her recent research has focused on human autonomy, cognitive enhancement, extended cognition, and the risks and ethics of AI.

Elissa Strome, Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR

Elissa is Executive Director, Pan-Canadian Artificial Intelligence Strategy at CIFAR, working with research leaders across the country to implement Canada’s national research strategy in AI.  Elissa completed her PhD in Neuroscience from the University of British Columbia in 2006. Following a post-doc at Lund University, in Sweden, she decided to pursue a career in research strategy, policy and leadership. In 2008, she joined the University of Toronto’s Office of the Vice-President, Research and Innovation and was Director of Strategic Initiatives from 2011 to 2015. In that role, she led a small team dedicated to advancing the University’s strategic research priorities, including international institutional research partnerships, the institutional strategy for prestigious national and international research awards, and the establishment of the SOSCIP [Southern Ontario Smart Computing Innovation Platform] research consortium in 2012. From 2015 to 2017, Elissa was Executive Director of SOSCIP, leading the 17-member industry-academic consortium through a major period of growth and expansion, and establishing SOSCIP as Ontario’s leading platform for collaborative research and development in data science and advanced computing.

Tutorial on AI and the Law

Prof. Maura R. Grossman, University of Waterloo, and

Hon. Paul W. Grimm, United States District Court for the District of Maryland

AI applications are becoming ubiquitous in almost every field of endeavor, and the legal industry is no exception. This panel, consisting of an experienced lawyer and computer scientist and a U.S. federal trial court judge, will discuss how AI is currently being used in the legal profession, what adoption has been like since AI was introduced to law in about 2009, what legal and ethical issues AI applications have raised in the legal system, and how a sitting trial court judge who is not an AI expert approaches AI evidence, in particular the determination of whether or not to admit it.

How is AI being used in the legal industry today?

What has the legal industry’s reaction been to legal AI applications?

What are some of the biggest legal and ethical issues implicated by legal and other AI applications?

How does a sitting trial court judge evaluate AI evidence when making a determination of whether to admit that AI evidence or not?

What considerations go into the trial judge’s decision?

What happens if the judge is not an expert in AI?  Do they recuse?

You may recognize the name Julia Stoyanovich; she was mentioned here in my March 23, 2022 posting titled “The ‘We are AI’ series gives citizens a primer on AI,” about a series of peer-to-peer workshops aimed at introducing the basics of AI to the public. There’s also a comic book series associated with it, and all of the materials are available for free. It’s all there in the posting.

Getting back to the Responsible AI activities webpage, there’s one more activity, and this one seems a little less focused on experts,

Virtual Meet and Greet on Responsible AI across Canada

Given the many activities that are fortunately happening around the responsible and ethical aspects of AI here in Canada, we are organizing an event in conjunction with Canadian AI 2022 this year to become familiar with what everyone is doing and what activities they are engaged in.

It would be wonderful to have a unified community here in Canada around responsible AI so we can support each other and find ways to more effectively collaborate and synergize. We are aiming for a casual, discussion-oriented event rather than talks or formal presentations.

The meet and greet will be hosted by Ebrahim Bagheri, Eleni Stroulia and Graham Taylor. If you are interested in participating, please email Ebrahim Bagheri (bagheri@ryerson.ca).

Thank you to the co-chairs for getting the word out about the Responsible AI topic at the conference,

Responsible AI Co-chairs

Ebrahim Bagheri
Professor
Electrical, Computer, and Biomedical Engineering, Ryerson University
Website

Eleni Stroulia
Professor, Department of Computing Science
Acting Vice Dean, Faculty of Science
Director, AI4Society Signature Area
University of Alberta
Website

The organization that hosts these conferences has an almost palindromic abbreviation: CAIAC, for Canadian Artificial Intelligence Association (CAIA) or Association Intelligence Artificielle Canadienne (AIAC). Yes, you do have to read it in both English and French, and a C gets knocked off one end or the other depending on which language you’re using, which is why it’s only almost a palindrome.

The CAIAC is almost 50 years old (under various previous names) and has its website here.

*April 22, 2022 at 1400 hours PT removed ‘the’ from this section of the headline: “… from 30 May to 3 June, 2022.” and removed period from the end.

Do your physical therapy and act as a citizen scientist at the same time

I gather that recovering from a serious injury and/or surgery can require exercise regimens that help strengthen you but can be mind-numbingly boring. According to a Feb. 23, 2017 New York University Tandon School of Engineering news release (also on EurekAlert), scientists have found a way to make the physical rehabilitation process more meaningful,

Researchers at the NYU Tandon School of Engineering have devised a method by which patients requiring repetitive rehabilitative exercises, such as those prescribed by physical therapists, can voluntarily contribute to scientific projects in which massive data collection and analysis is needed.

Citizen science empowers people with little to no scientific training to participate in research led by professional scientists in different ways. The benefit of such an activity is often bidirectional, whereby professional scientists leverage the effort of a large number of volunteers in data collection or analysis, while the volunteers increase their knowledge on the topic of the scientific endeavor. Tandon researchers added the benefit of performing what can sometimes be boring or painful exercise regimes in a more appealing yet still therapeutic manner.

The citizen science activity they employed entailed the environmental mapping of a polluted body of water (in this case Brooklyn’s Gowanus Canal) with a miniature instrumented boat, which was remotely controlled by the participants through their physical gestures, as tracked by a low-cost motion capture system that does not require the subject to don special equipment. The researchers demonstrated that the natural user interface offers an engaging and effective means for performing environmental monitoring tasks. At the same time, the citizen science activity increased the commitment of the participants, leading to a better motion performance, quantified through an array of objective indices.

Visiting Researcher Eduardo Palermo (of Sapienza University of Rome), Post-doctoral Researcher Jeffrey Laut, Professor of Technology Management and Innovation Oded Nov, late Research Professor Paolo Cappa, and Professor of Mechanical and Aerospace Engineering Maurizio Porfiri provided subjects with a Microsoft Kinect sensor, a markerless human motion tracker capable of estimating three-dimensional coordinates of human joints that was initially designed for gaming but has since been widely repurposed as an input device for natural user interfaces. They asked participants to pilot the boat, controlling thruster speed and steering angle, by lifting one arm away from the trunk and using wrist motions, in effect mimicking one widely adopted type of rehabilitative exercise based on repetitively performing simple movements with the affected arm. Their results suggest that an inexpensive, off-the-shelf device can offer an engaging means to contribute to important scientific tasks while delivering relevant and efficient physical exercises.

“The study constitutes a first and necessary step toward rehabilitative treatments of the upper limb through citizen science and low-cost markerless optical systems,” Porfiri explains. “Our methodology expands behavioral rehabilitation by providing an engaging and fun natural user interface, a tangible scientific contribution, and an attractive low-cost markerless technology for human motion capture.”

Caption: NYU Tandon researchers reported that volunteers who performed repetitive exercises while contributing as citizen scientists were more effective in their physical therapy motions. In the experiment, the volunteers controlled a small boat monitoring the polluted Gowanus Canal by performing hand and arm motions using the Microsoft Kinect motion capture system. Credit: NYU Tandon, PLoS ONE
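
For readers curious about how a gesture interface like this could work in practice, here is a minimal, hypothetical sketch (not the researchers’ code) of turning tracked 3D joint coordinates, of the kind a Kinect reports, into thruster and steering commands. The joint names, angle thresholds, and linear mappings are all illustrative assumptions on my part; the actual interface, calibration, and performance indices are described in the paper cited below.

```python
# A minimal, hypothetical sketch (not the researchers' code) of mapping
# Kinect-style 3D joint coordinates to boat commands: arm elevation away
# from the trunk sets thruster speed, wrist bending sets the steering angle.
# Joint names, thresholds, and the linear mappings are illustrative guesses.
import numpy as np

def angle_between_deg(v1, v2):
    """Angle between two 3D vectors, in degrees."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def boat_commands(joints):
    """Return (thruster_speed in [0, 1], steering_angle in [0, 30] degrees)."""
    # Arm elevation: angle between the upper arm (shoulder -> elbow)
    # and the trunk (shoulder -> hip); arm hanging at the side ~ 0 degrees.
    elevation = angle_between_deg(
        np.subtract(joints["elbow"], joints["shoulder"]),
        np.subtract(joints["hip"], joints["shoulder"]),
    )
    thruster = np.clip(elevation / 90.0, 0.0, 1.0)  # assume ~90 deg = full speed

    # Wrist motion: angle between the forearm (elbow -> wrist)
    # and the hand segment (wrist -> hand).
    wrist_bend = angle_between_deg(
        np.subtract(joints["wrist"], joints["elbow"]),
        np.subtract(joints["hand"], joints["wrist"]),
    )
    steering = np.clip(wrist_bend / 45.0, 0.0, 1.0) * 30.0  # assume ~45 deg = full lock
    return thruster, steering

# Example with made-up coordinates (metres), arm raised about 60 degrees:
joints = {
    "hip": (0.0, -0.5, 0.0),
    "shoulder": (0.0, 0.0, 0.0),
    "elbow": (0.26, -0.15, 0.0),
    "wrist": (0.50, -0.25, 0.0),
    "hand": (0.58, -0.25, 0.08),
}
print(boat_commands(joints))  # roughly (0.67, 30.0) with these numbers
```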

Here’s a link to and a citation for the paper,

A Natural User Interface to Integrate Citizen Science and Physical Exercise by Eduardo Palermo, Jeffrey Laut, Oded Nov, Paolo Cappa, and Maurizio Porfiri. PLOS ONE, published February 23, 2017. DOI: http://dx.doi.org/10.1371/journal.pone.0172587

This paper is open access.