Diversity, Equity & Inclusion

AI Challenges During the Gender Equality and Diversity Era: Do We Need a Legal Frame?

By Cecilia Celeste Danesi posted 02-26-2021 13:51

  

Please enjoy this blog post authored by Cecilia Celeste Danesi, AI and law researcher and professor (Instagram @ceciliadanesi, LinkedIn Cecilia Celeste Danesi, www.ceciliadanesi.com).

Nowadays we can find artificial intelligence everywhere: fighting Covid-19, predicting our music preferences, deciding who gets a loan or a university place…just to name a few examples. But why is AI so ubiquitous? And, more importantly, does AI carry any risks? Can it affect our rights? Let's see.

AI belongs to the broader field of data science and has subareas of its own, such as machine learning and deep learning. In simple terms, AI consists of making predictions about the future based on past data. A dataset is processed by an algorithm (a sequence of logical steps, like a cooking recipe) that produces a prediction. To give an example, since the pandemic started one use of AI has been to detect Covid-19: the system was trained with chest x-rays and radiologists' reports, and its prediction was coronavirus "negative" or "positive".
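To make that "learn from past data, predict on new data" pattern concrete, here is a minimal, hypothetical sketch. The numbers and labels are invented purely for illustration, and scikit-learn's logistic regression stands in for whatever model a real diagnostic system would use:

```python
# Minimal sketch of "learn from past data, predict on new data".
# The features and labels below are made up for illustration only.
from sklearn.linear_model import LogisticRegression

# Past data: each row is a simplified summary of a chest x-ray
# (two extracted image features); each label is the radiologist's report.
past_features = [[0.1, 0.3], [0.8, 0.7], [0.2, 0.1], [0.9, 0.9]]
past_labels   = ["negative", "positive", "negative", "positive"]

# The "algorithm" (a sequence of logical steps, like a cooking recipe)
# fits a model to the historical examples...
model = LogisticRegression().fit(past_features, past_labels)

# ...and the trained model is then used to predict an unseen case.
print(model.predict([[0.85, 0.6]]))  # e.g. ['positive']
```

The key point is simply that whatever patterns exist in the historical data, good or bad, are what the model learns.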

Thanks to its capacity to process huge amounts of data, the speed with which it does so, its accuracy, and its ability to relieve human beings of routine tasks, AI has become the protagonist of the 4th Industrial Revolution. But… not all that glitters is gold. Let's talk about algorithmic bias.

The best way to understand what this means is with some sad, real examples. In 2014, Amazon created an AI system to review job applicants' CVs based on the company's previous 10 years of hiring files. A year later, Amazon realized that the tool was discriminating against female candidates applying for IT positions and favoring men instead. This happened because the system had been trained on biased information: during those years most applicants and hired employees had been men. When the company found this out, it stopped using the system.

Another case was that of the "Apple Card", the first credit card issued by the well-known brand. A customer and well-known businessman, David H. Hansson, openly criticized Apple after he and his wife, who had identical credit scores, were both approved for the card, yet her spending limit was 20 times lower than his. This opened a debate on social media about the opacity of algorithms.

We can't help but mention social media, since algorithms manage its content, with all their bias. Microsoft's chatbot was one of the most relevant cases: it learned from Twitter posts and, only 24 hours after its launch, had to be shut down because it started referring to feminism as a "cult" and a "cancer." Twitter, in turn, hit the news last year when it came to light that its image-cropping algorithm favored white faces when generating photo thumbnails. Along the same lines, Instagram's algorithm blocked a Celeste Barber photo in which she was imitating a nude pose by a Victoria's Secret model, a photo that had obviously not been censored when the model herself posted it.

On top of that, one of the worst manifestations of algorithmic bias is in the facial recognition tools that have been deployed in government video-surveillance systems in Argentina, China, and the United States, just to mention a few. In Argentina's case, the system can detect whether a person walking through the subway has an APB out (a request to find and arrest them). Bias in this kind of system can be very dangerous because an innocent citizen can be pinpointed as a fugitive and arrested. This is what happened to Robert Julian-Borchak Williams, an African American man who was held under arrest for 30 hours in Michigan because of a facial recognition failure. This issue is well explained in the documentary "Coded Bias", which opens with the case of Joy Buolamwini, a researcher at the MIT Media Lab, who discovered that most facial recognition software does not recognize darker-skinned or female faces. To be recognized, she had to wear a white mask.

All these examples show us that algorithmic bias is real, and it happens for two reasons. On the one hand, there is the bias of the AI experts themselves. The teams that develop AI typically lack diversity and women's participation. Moreover, data specialists, engineers, and graduates in the exact sciences usually have no training in ethics or legal matters, so they are not aware of the damage an AI system can cause to human rights. On the other hand, the problem lies in the data: the dataset used to train the algorithm is biased. The report "Tackling Social Norms: A game changer for gender inequalities", published in March 2020 by the UN Development Programme (UNDP), shows that despite decades of progress in closing the gender equality gap, almost 9 out of 10 men and women around the world hold some sort of bias against women. This bias is carried into the system, which learns the pattern and replicates it in its predictions; in the words of Cathy O'Neil, author of Weapons of Math Destruction, this is a "pernicious feedback loop". As we can see, AI has a great power to reinforce and widen the gender equality gap.
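As a toy illustration of how biased historical data propagates into predictions, consider the following sketch. The hiring records are entirely fabricated and this is not any company's real system; it only reproduces the pattern described above under that assumption:

```python
# Toy illustration of a bias feedback loop: a model trained on
# historically biased hiring decisions reproduces that bias.
# All data below is fabricated purely for demonstration.
from sklearn.tree import DecisionTreeClassifier

# Columns: [skill_score, gender], with gender encoded 0 = male, 1 = female.
# In this fabricated history, equally skilled women were never hired.
past_candidates = [[9, 0], [8, 0], [7, 0], [9, 1], [8, 1], [7, 1]]
past_decisions  = ["hired", "hired", "hired", "rejected", "rejected", "rejected"]

model = DecisionTreeClassifier().fit(past_candidates, past_decisions)

# Two new candidates with the same skill score, differing only by gender:
print(model.predict([[9, 0], [9, 1]]))  # likely ['hired', 'rejected']
# The model has learned the historical pattern, not merit; if its decisions
# feed back into future training data, the gap keeps widening.
```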

What could be the solution? To begin with, the problem is not the technology but how it is used. We also need to remember that human rights law remains in force, so any breach already carries legal consequences. Beyond that, we have plenty of ethical principles that address AI issues but, as they are not mandatory, they are not effective. Therefore, we need a legal framework that guarantees our rights but at the same time does not stifle technological innovation.

The European Union is working hard on this. In October 2020, the European Parliament adopted three legislative-initiative reports: "A framework of ethical aspects of artificial intelligence, robotics and related technologies"; "Civil liability regime for artificial intelligence"; and "Intellectual property rights (IPRs) for the development of artificial intelligence technologies".

The first report translates most of the ethical principles of the "Ethics Guidelines for Trustworthy Artificial Intelligence" (EU, 08/04/19) into legal obligations and proposes the creation of a European certificate of ethical compliance with which high-risk technologies would have to comply.

The requirement of an "ethical certificate" for high-risk technologies is a promising way to deal with bias: it implies an exhaustive evaluation of the system throughout the entire AI life cycle. Along the same lines, the US bill "Algorithmic Accountability Act of 2019" would authorize the Federal Trade Commission to create regulations requiring companies to assess their automated decision-making systems for accuracy, fairness, bias, discrimination, privacy, and security.
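One very simple check that such an assessment might include is a comparison of favourable-outcome rates across groups. The sketch below is purely illustrative: the group labels, outcomes, and the 0.8 threshold (echoing the informal "four-fifths rule") are assumptions, not anything prescribed by the EU reports or the US bill:

```python
# Minimal sketch of one bias check an algorithmic audit might include:
# compare favourable-outcome rates across two groups (illustrative data).
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = favourable decision (e.g. loan approved), 0 = unfavourable.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # e.g. male applicants
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # e.g. female applicants

rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"selection rates: {rate_a:.2f} vs {rate_b:.2f}, ratio {ratio:.2f}")
# A ratio well below 0.8 would flag the system for closer review
# during the assessment.
```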

To conclude, we have to ask: what are companies doing about this? Are they committed to fighting algorithmic bias? A case in point is Google's decision to fire Timnit Gebru in December and Margaret Mitchell this month, both women researchers in Google's ethical AI department. Gebru was a co-founder of "Black in AI", a community of Black researchers working on AI. She was fired after she refused to withdraw an academic paper about the huge energy cost of training large AI models and the bias embedded in systems trained on text found on the internet. Mitchell was one of the long list of people who protested Gebru's dismissal; she has now been fired too.


#GlobalPerspective
#DiversityEquityandInclusion
#ArtificialIntelligence