Racial and Gender Bias Found in Facial Recognition Systems

“In somebody’s database somewhere we are all being scored”: this is the not-so-surprising truth that most people live with, unaware of its ethical implications. Now imagine being scored, targeted, or discarded by a machine just because of your skin color, your gender, or, not so surprisingly, both.

To learn more about it, I watched Coded Bias (2020). Here’s what I learned.

Wait, what is it about?

It is a documentary that shows how MIT Media Lab researcher Joy Buolamwini discovered racial and gender bias in facial recognition systems, and it explores the ethical implications of, and the need to regulate, this and similar technologies in use today.


You lost me. What are facial recognition systems? 

It’s a technology that maps facial features from a photograph or video using biometrics, then searches a database of known faces for a match. Facial recognition relies on artificial intelligence (AI), more specifically machine learning (ML), to capture, store, analyze, and compare faces across databases.
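For the technically curious, here is a minimal sketch of that matching step, assuming each face has already been turned into a numeric vector (an “embedding”) by some ML model. Everything below, the names, the vectors, the 0.8 threshold, is invented for illustration; real systems use high-dimensional embeddings from trained neural networks:

```python
# A minimal, illustrative sketch of face matching: compare a new face
# embedding against a database of known embeddings and pick the closest.
# The vectors and threshold are made up for illustration.
import numpy as np

def cosine_similarity(a, b):
    """How alike two embeddings are (1.0 means pointing the same way)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_match(probe, database, threshold=0.8):
    """Return the best-matching name, or None if nothing is similar enough."""
    best_name, best_score = None, threshold
    for name, known in database.items():
        score = cosine_similarity(probe, known)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

database = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob": np.array([0.1, 0.8, 0.3]),
}
print(find_match(np.array([0.88, 0.12, 0.21]), database))  # -> alice
```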


And if you’re not familiar with the concepts… 

  • Artificial intelligence (AI) is the area of computer science focused on creating systems that work and react like humans.

  • Machine learning (ML) involves algorithms that can learn from data and even make their own predictions. In simple terms: teaching a machine how to learn, make decisions, and even predict (see the small sketch below).
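To make “learning from data” concrete, here is a tiny, hypothetical example using the scikit-learn library. The numbers are invented; the point is only that the model is shown labeled examples and then makes its own prediction on input it has never seen:

```python
# Teaching a machine to learn: show it labeled examples, let it find a
# rule, then ask it to predict on new data.
from sklearn.linear_model import LogisticRegression

# Made-up training data: hours studied -> passed the exam (1) or not (0).
hours_studied = [[1], [2], [3], [4], [5], [6]]
passed_exam = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_studied, passed_exam)   # the "learning" step
print(model.predict([[4.5]]))           # the model's own prediction: [1]
```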


And who is Joy Buolamwini?


Joy Adowaa Buolamwini is a Ghanaian-American computer scientist and activist based at the MIT Media Lab. She is the founder of the Algorithmic Justice League, an organization that raises awareness of the social implications of AI. Note: she’s not against AI; she advocates for regulating AI technology to ensure equity and accountability.

More women of color in tech! 🙌🏿🙌🏾🙌🏽


Going back to the documentary…

We are constantly surveilled, analyzed, categorized, and nudged toward certain behaviors.

Sounds like a dystopian book, right? 😨 Although we usually don't realize it, this technology is almost everywhere in today's world. Phones, tablets, PCs, and other devices collect our data to feed AI systems, which can use this information for various purposes, commercial ones among them.


Is this a problem?

Not necessarily: this type of technology makes processes more efficient. It has the potential to be used in industries such as law, finance, and healthcare, as well as in areas like security, HR, and marketing, among others.

The idea sounds good when, let’s say, you unlock your phone simply by holding it in front of your face, or when you verify your Bumble account, which requires facial recognition.


Sounds okay, so why should I worry? 

For many years, our understanding of technology has been dominated by the idea that it is objective (I mean its creation, not necessarily the way we humans use it). The problem arises when there is no objectivity or neutrality in these systems.


Let’s talk about Joy Buolamwini’s research, published in 2018. The results revealed that many leading machine-learning systems could not classify the faces of darker-skinned women as accurately as those of white men.

[Chart: percentage of accuracy in facial recognition, from Coded Bias (2020).]
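What does an audit like that look like in practice? Below is a hedged sketch of the core idea behind this kind of disaggregated evaluation: instead of reporting one overall accuracy number, compute accuracy separately for each demographic group. All the records here are invented; real audits like Buolamwini’s use large benchmark datasets:

```python
# Disaggregated evaluation: one accuracy number per demographic group,
# instead of a single overall score that can hide large disparities.
# All records below are invented for illustration.
from collections import defaultdict

# (group, true gender, gender predicted by the system) per test face.
results = [
    ("lighter-skinned men", "male", "male"),
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "female", "female"),
    ("darker-skinned women", "female", "male"),   # a misclassification
]

correct = defaultdict(int)
total = defaultdict(int)
for group, truth, predicted in results:
    total[group] += 1
    correct[group] += int(truth == predicted)

for group in total:
    print(f"{group}: {correct[group] / total[group]:.0%} accurate")
```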

A machine discriminating? Is that possible? 

Yes. To be clear, this technology is not evil or anything; it does what it is programmed to do. The problem occurs when programmers feed the system biased datasets, which makes it inequitable for its users.
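One common way that bias enters is through skewed training data. Here is a quick, illustrative sketch of a dataset audit, with invented counts, showing the kind of imbalance that leads a model to perform worse on underrepresented groups:

```python
# A dataset audit: count how many training images each group contributes.
# A model trained on this skew sees few examples of some groups and
# tends to make more errors on them. Counts are invented.
from collections import Counter

training_labels = (
    ["lighter-skinned man"] * 800
    + ["lighter-skinned woman"] * 120
    + ["darker-skinned man"] * 60
    + ["darker-skinned woman"] * 20
)

for group, count in Counter(training_labels).most_common():
    share = count / len(training_labels)
    print(f"{group}: {count} images ({share:.0%})")
```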


An example? Please

Three! Taken from the documentary:

  1. Using facial recognition to detect criminals. Problem: when it misidentifies people and wrongfully accuses them. Its accuracy is far from proven: “when it came to black and brown individuals though, the rate of inaccuracy shot up. Darker-skinned women had a misidentification rate of 34%.” (Gal-dem)

  2. Using facial recognition to verify a person’s financial status, allowing or restricting their access to certain products and services (this system is used in China).

  3. Using deep learning (a subset of machine learning) in HR processes to identify the most qualified candidates. Problem: when the system ends up discarding women’s CVs.


💭 The ethical issues here are the following:

  • Are we replicating social issues in the digital world? (The different forms of discrimination).

  • Is this not-yet-regulated technology normalizing inequality?

  • How do you ensure equality under the “neutrality” of AI?

🗒 Let’s wrap it up!

AI is an innovative, highly efficient technology with numerous applications, and facial recognition simplifies many processes, but it is crucial to understand the ethical issues that come with it. Legal boundaries are needed for how our data is collected and used, and these technologies must work equally well for everyone. There is an urgent need to raise awareness of this situation, because if we don’t, what will the human cost be?

Juliana Beltrán

Content Writer┃Bachelor in Business Management┃Book Enthusiast & Gelato Taster.
