Girl Trouble—UNESCO’s and the World Economic Forum’s Breaking Through Bias in AI panel on International Women’s Day March 8, 2021

What a Monday morning! The United Nations Educational, Scientific and Cultural Organization (UNESCO; French: Organisation des Nations unies pour l’éducation, la science et la culture) and the World Economic Forum (WEF) hosted a live webcast (which started at 6 am PST or 1500 CET [3 pm in Paris, France]). The session is available online for viewing both here on UNESCO’s Girl Trouble webpage and here on YouTube. It’s about 2.5 hours long, with two separate discussions and a question period after each. You will have a two-minute wait before seeing any speakers or panelists.

Here’s why you might want to check this out (from the Girl Trouble: Breaking Through The Bias in AI page on the UNESCO website),

UNESCO and the World Economic Forum present Girl Trouble: Breaking Through The Bias in AI on International Women’s Day, 8th March, 3:00 pm – 5:30 pm (CET). This timely round-table brings together a range of leading female voices in tech to confront the deep-rooted gender imbalances skewing the development of artificial intelligence. Today critics charge that AI feeds on biased data-sets, amplifying the existing anti-female biases of our societies, and that AI is perpetuating harmful stereotypes of women as submissive and subservient. Is it any wonder when only 22% of AI professionals globally are women?

Our panelists are female change-makers in AI. From C-suite professionals taking decisions which affect us all, to women innovating new AI tools and policies to help vulnerable groups, to those courageously exposing injustice and algorithmic biases, we welcome:

Gabriela Ramos, Assistant Director-General of Social and Human Sciences, UNESCO, leading the development of UNESCO’s Recommendation on the Ethics of AI, the first global standard-setting instrument in the field;
Kay Firth-Butterfield, Keynote speaker. Kay was the world’s first chief AI Ethics Officer. As Head of AI & Machine Learning, and a Member of the Executive Committee of the World Economic Forum, Kay develops new alliances to promote awareness of gender bias in AI;
Ashwini Asokan, CEO of Chennai-based AI company, Mad Street Den. She explores how Artificial Intelligence can be applied meaningfully and made accessible to billions across the globe;
Adriana Bora, a researcher using machine learning to boost compliance with the UK and Australian Modern Slavery Acts, and to combat modern slavery, including the trafficking of women;
Anne Bioulac, a member of the Women in Africa Initiative, developing AI-enabled online learning to empower African women to use AI in digital entrepreneurship;
Meredith Broussard, a software developer and associate professor of data journalism at New York University, whose research focuses on AI in investigative reporting, with a particular interest in using data analysis for social good;
Latifa Mohammed Al-AbdulKarim, named by Forbes magazine as one of 100 Brilliant Women in AI Ethics, and as one of the women defining AI in the 21st century;
Wanda Munoz, of the Latin American Human Security Network. One of the Nobel Women’s Initiative’s 2020 peacebuilders, she raises awareness around gender-based violence and autonomous weapons;
Nanjira Sambuli, a Member of the UN Secretary General’s High-Level Panel for Digital Cooperation and Advisor for the A+ Alliance for Inclusive Algorithms;
Jutta Williams, Product Manager at Twitter, analyzing how Twitter can improve its models to reduce bias.

There’s an urgent need for more women to participate in and lead the design, development, and deployment of AI systems. Evidence shows that by 2022, 85% of AI projects will deliver erroneous outcomes due to bias.

Recruiters searching online for female AI specialists just cannot find them. Companies hiring experts for AI and data science jobs estimate that fewer than 1 per cent of the applications they receive come from women. Women and girls are 4 times less likely to know how to programme computers, and 13 times less likely to file for a technology patent. They are also less likely to occupy leadership positions in tech companies.

Building on UNESCO’s cutting-edge research in this field, its flagship 2019 publication “I’d Blush if I Could”, and its policy guidance on gender equality in the 2020 UNESCO Draft Recommendation on the Ethics of Artificial Intelligence, the panel will look at:

1. The 4th industrial revolution is on our doorstep, and gender equality risks being set back decades. What more can we do to attract more women to design jobs in AI, and to support them to take their seats on the boards of tech companies?

2. How can AI help us advance women and girls’ rights in society? And how can we solve the problem of algorithmic gender bias in AI systems?

Women’s leadership in the AI sector at all levels, from big tech to the start-up AI economy in developing countries, will be placed under the microscope.

Confession: I set the timer correctly but then forgot to set the alarm, so I watched only the last 1.5 hours (I plan to go back and get the first hour later). Here’s a little of what transpired.

Moderator

Kudos to the moderator, Natashya Gutierrez, for her excellent performance; it can’t have been easy to keep track of the panelists and questions for 2.5 hours,

Natashya Gutierrez, Editor-in-Chief APAC, VICE World News

Natashya is an award-winning multimedia journalist and current Editor in Chief of VICE World News in APAC [Asia-Pacific]. She oversees editorial teams across Australia, Indonesia, India, Hong Kong, Thailand, the Philippines, Singapore, Japan and Korea. Natashya’s reporting specialises in women’s rights. At VICE, she hosts Unequal, a series focused on gender inequality in Asia. She is the recipient of several journalism awards, including a Society of Publishers in Asia award for her reporting on women’s issues, and the Asia Journalism Fellowship. Before VICE, she was part of the founding team of Rappler, an online news network based in the Philippines. She has been selected as one of Asia’s emerging young leaders and named a Development Fellow by the Asia Foundation. Natashya is a graduate of Yale University.

First panel discussion

For anyone who’s going to watch the session, don’t forget it takes about two minutes before there’s sound. The first panel was focused on “the female training and recruitment crisis in AI.”

  • The right people

I have a suspicion that Ashwini Asokan’s comment about getting the ‘right people’ to create the algorithms and make decisions about AI was not meant the way it might sound. I will have to listen again but, at a guess, I think she was suggesting that a group of 25-to-35-year-old developers (mostly male and working in monoculture environments) is not going to be cognizant of how their mathematical decisions will impact real-world lives.

So, getting the ‘right people’ means more inclusive hiring.

  • Is AI always the best solution?

In all the talk about AI, it’s assumed that this technology is the best solution to all problems. One of the panelists (Nanjira Sambuli) suggested that an analogue solution (e.g., a book) might sometimes be the better choice.

There are some things that people are better at than AI (I can’t remember which panelist said this). That comment hints at something which seems heretical: it challenges the notion that technology is always better than a person.

I once had someone at a bank explain to me that computers were very smart (by implication, smarter than me). That was 30 years ago, and the teller was talking about a database.

Adriana Bora (I think) suggested that lived experience should be considered when putting together consultative groups and developer groups.

This theme of AI not being the best solution for all problems came up again in the second panel discussion.

Second panel discussion

The second panel was focused on “innovative AI-based solutions to address bias against women.”

  • AI is math and it’s hard

It’s surprisingly easy to forget that AI is math. Meredith Broussard pointed out that most of us (around the world) have a very Hollywood idea about what AI is.
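
To make that concrete, here’s a minimal sketch (my own illustration, nothing from the panel) of the kind of arithmetic at the heart of machine learning: a single logistic-regression ‘neuron’ is just multiplication, addition, and one squashing function. All of the numbers below are invented for the example.

```python
import math

def predict(features, weights, bias):
    """One logistic-regression 'neuron': a weighted sum pushed through a sigmoid."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-score))  # squash the score into a 0-1 'probability'

# Hypothetical numbers, chosen purely for illustration.
print(predict([0.5, 1.2], weights=[0.8, -0.3], bias=0.1))  # ~0.535
```

Stack millions of these little sums together and you get the ‘deep learning’ of the headlines; the Hollywood robot never appears.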

Broussard noted that AI has its limits and there are times when it’s not the right choice.

She made an interesting point in her comment about AI being hard. I don’t think she meant to echo the old cliché ‘math is hard, so it’s not for girls’. The comment seemed to speak to the breadth and depth of the AI sector: alongside the challenging mathematics, we need to take into account so much more than was imagined during the Industrial Revolution, when ecological consequences went unconsidered and inequities were often taken as god-given.

  • Inequities and language

Natashya Gutierrez, the moderator, noted that AI doesn’t create bias, it magnifies it.

One of the panelists, Jutta Williams (Twitter), noted later that algorithms are designed to favour certain types of language, e.g., information presented as factual rather than emotive; that’s how you get more attention on social media platforms. In essence, the bias in the algorithms was not towards males as such but towards the way they tend to communicate.
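
To illustrate how that can happen, here’s a toy sketch of my own (not Twitter’s actual model; the word lists and weights are invented). A scoring rule that rewards ‘factual’ markers and discounts emotive ones looks neutral on its face, yet it bakes a stylistic preference straight into the ranking:

```python
# Invented marker lists; a real model would learn such patterns from data.
ASSERTIVE_MARKERS = {"data", "evidence", "fact", "study", "research"}
EMOTIVE_MARKERS = {"feel", "hurt", "worried", "scared", "heartbroken"}

def engagement_score(post: str) -> float:
    """Score a post; the weights favour 'factual' wording over emotive wording."""
    words = [w.strip(".,!?").lower() for w in post.split()]
    assertive = sum(w in ASSERTIVE_MARKERS for w in words)
    emotive = sum(w in EMOTIVE_MARKERS for w in words)
    return 1.0 + 0.5 * assertive - 0.3 * emotive

print(engagement_score("The data and the study support this fact."))  # 2.5
print(engagement_score("I feel hurt and worried about this."))        # ~0.1
```

The same idea scaled up to a learned model is much harder to spot, because the preference is buried in millions of parameters rather than two word lists.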

  • Laziness

Describing engineers as ‘lazy’, Meredith Broussard summed up the mindset with the phrase ‘write once, run anywhere’.

A colleague, some years ago, drew my attention to the problem. She was unsuccessfully trying to get the developers to fix a problem in the code. They simply couldn’t be bothered. It wasn’t an interesting problem and there was no reward for fixing it.

I’m having a problem now where I suspect engineers/developers don’t want to tweak or correct code in WordPress. It’s the software I use to create my blog postings and I use tags to make those postings easier to find.

Sometime in December 2018, I updated my blog software to its latest version. Many problems ensued, but one persists to this day: I can’t tag any new words with apostrophes in them (very common in French). The system refuses to save them.

Previous versions of WordPress were quite capable of saving words with apostrophes. Those words are still in my ‘tag database’.
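
I can only guess at the cause, but here’s a sketch of the kind of over-strict input validation that would produce exactly this behaviour. This is hypothetical Python, not WordPress’s actual tag-handling code; the pattern and function names are mine:

```python
import re

# Hypothetical whitelist: ASCII letters, digits, spaces, and hyphens only.
# Apostrophes (straight or curly) and accented characters all fail it.
TAG_PATTERN = re.compile(r"^[A-Za-z0-9 \-]+$")

def save_tag(tag: str) -> bool:
    """Pretend tag-save routine: silently refuses anything outside the whitelist."""
    return bool(TAG_PATTERN.match(tag))

print(save_tag("nanotechnology"))   # True: saved
print(save_tag("l'intelligence"))   # False: the apostrophe fails the whitelist
```

A filter like this would also explain why the old apostrophe-bearing tags survive in the database: the check runs only when a new tag is saved, not against tags that are already stored.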

  • Older generation has less tech savvy

Adriana Bora suggested that the older generation should also be considered in discussions about AI and inclusivity. I’m glad she mentioned it.

Unfortunately, she seemed to be under the impression that seniors don’t know much about technology.

Yes and no. Who do you think built and developed the technologies you are currently using? Probably your parents and grandparents. Networks were first developed in the early to mid-1960s. The Internet is approximately 40 years old. (You can get the details in the History of the Internet entry on Wikipedia.)

Yes, I’ve made that mistake about seniors/elders too.

It’s possible that a person over … what age is that? Over 55? Over 60? Over 65? Over 75? And so on. Anyway, that person may not have had much experience with the digital world, or their experience may be dated, but the assumption is problematic.

As an antidote, here’s one of my favourite blogs, Grandma Got STEM. It’s mostly written by people reminiscing about their STEM mothers and grandmothers.

  • Bits and bobs

There seemed to be general agreement that there needs to be more transparency about the development of AI and what happens in the ‘AI black box’.

Gabriela Ramos, UNESCO’s Assistant Director-General of Social and Human Sciences, commented that transparency needs to be paired with choice; otherwise, it won’t do much good.

After recounting a distressing story about how activists have had their personal information revealed on various networks, Wanda Munoz noted that AI can nonetheless be used for good.

The concerns are not theoretical and my final comments

Munoz, of course, brought a real-life example of bad things happening, but I’d like to reinforce it with one more. In a January 13, 2021 news article by Leo Kelion, the British Broadcasting Corporation (BBC) broke the news that Huawei, a Chinese technology company, had technology that could identify ethnic groups (Note: Links have been removed),

A Huawei patent has been brought to light for a system that identifies people who appear to be of Uighur origin among images of pedestrians.

The filing is one of several of its kind involving leading Chinese technology companies, discovered by a US research company and shared with BBC News.

Huawei had previously said none of its technologies was designed to identify ethnic groups.

It now plans to alter the patent.

The company indicated this would involve asking the China National Intellectual Property Administration (CNIPA) – the country’s patent authority – for permission to delete the reference to Uighurs in the Chinese-language document.

Uighur people belong to a mostly Muslim ethnic group that lives mainly in Xinjiang province, in north-western China.

Government authorities are accused of using high-tech surveillance against them and detaining many in forced-labour camps, where children are sometimes separated from their parents.

Beijing says the camps offer voluntary education and training.

Huawei’s patent was originally filed in July 2018, in conjunction with the Chinese Academy of Sciences.

It describes ways to use deep-learning artificial-intelligence techniques to identify various features of pedestrians photographed or filmed in the street.

But the document also lists attributes by which a person might be targeted, which it says can include “race (Han [China’s biggest ethnic group], Uighur)”.

More than one company has been caught out; do read the January 13, 2021 news article in its entirety.

I did not do justice to the depth and breadth of the discussion. (I noticed I missed a few panelists and it’s entirely my fault; I should have woken up sooner. I apologize for the omissions.)

If you have the time and the inclination, do go to the Girl Trouble: Breaking Through The Bias in AI page on the UNESCO website where, in addition to the panel video, you can find a number of related reports.

Happy International Women’s Day 2021.