Racism in AI: A Systemic Issue?
Grade 12 Student / Sun, 20 Feb 2022


When many people think of artificial intelligence (AI), they think of robots or self-driving cars. AI often elicits a sense of amazement, of wonder, of possibility for the future. But just as AI offers advancements, it also carries consequences: machines are prone to bias, including racism. As Black History Month comes to an end, it’s important to recognize the racial issues in our tech-fueled world, where AI algorithms touch everything from security cameras to the predictive text you use when messaging a friend. Any bias in these algorithms can result in massive, systemic issues that disproportionately affect Black individuals.


These machines learn by running training data through algorithms crafted by hundreds of researchers and engineers. This is where the bias comes into play: if the data being collected, cleaned and formatted is biased, the results will be biased too.
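

To make this concrete, here is a minimal sketch (with entirely made-up numbers) of “biased data in, biased results out”: a toy model that simply learns flag rates per group from historical records, so any skew in those records becomes its prediction.

```python
# Hypothetical historical records: (group, flagged) pairs. Group B is flagged
# far more often -- not because of behaviour, but because of how the data
# was collected.
from collections import defaultdict

records = [("A", 0)] * 90 + [("A", 1)] * 10 + \
          [("B", 0)] * 60 + [("B", 1)] * 40

# "Training": estimate P(flagged | group) straight from the data.
counts = defaultdict(lambda: [0, 0])  # group -> [total, flagged]
for group, flagged in records:
    counts[group][0] += 1
    counts[group][1] += flagged

for group, (total, flagged) in counts.items():
    print(f"Group {group}: model predicts {flagged / total:.0%} risk")
# Group A: model predicts 10% risk
# Group B: model predicts 40% risk  <- the skew in the data becomes the output
```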


James Zou, assistant professor of biomedical data science and of computer science and electrical engineering at Stanford University, told me, “These algorithms, you can view them sort of like babies who can read really quickly…You are asking the AI baby to read all these millions and millions of websites … but it doesn't really have a good understanding of what is a harmful stereotype and what is the useful association.” Like a baby constantly looking for trends and patterns, these algorithms learn by finding statistical associations in massive datasets.
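

A toy illustration of Zou’s “AI baby”: the snippet below (using a hypothetical four-sentence corpus, not web-scale data) just counts which words appear together. To the model, a gendered stereotype and a useful association are the same kind of statistical pattern.

```python
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor finished his shift",
    "the nurse finished her shift",
    "the doctor reviewed his notes",
    "the nurse checked her charts",
]

# Count how often pairs of words co-occur in the same sentence.
cooccur = Counter()
for sentence in corpus:
    words = sorted(set(sentence.split()))
    for a, b in combinations(words, 2):
        cooccur[(a, b)] += 1

# The model has "learned" that doctor~his and nurse~her go together -- a
# pattern it cannot distinguish from any genuinely useful association.
print(cooccur[("doctor", "his")], cooccur[("her", "nurse")])  # 2 2
print(cooccur[("doctor", "her")], cooccur[("his", "nurse")])  # 0 0
```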


One example of this is the COMPAS algorithm, which predicts how likely a defendant in a United States court is to reoffend. Because Black people are more likely to be arrested in the United States due to historical racism and disparities in policing practices, that disparity is reflected in the algorithm’s training data. As such, AI systems that predict the likelihood of future criminal acts are also biased and discriminatory against Black people: they look for trends within a dataset in which Black individuals are already over-represented because of biased policing. As the world moves towards ever more pervasive surveillance, using these algorithms to detect and monitor crime carries massive risks, potentially propagating the systemic racism Black individuals have already faced for hundreds of years.
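

This feedback loop can be sketched in a few lines. The simulation below is hypothetical: both groups offend at exactly the same rate by construction, but one is policed twice as heavily, so a naive model trained on arrest records assigns it double the “risk”.

```python
import random
from collections import Counter

random.seed(0)

POPULATION = 10_000                   # people per group
TRUE_OFFENSE_RATE = 0.10              # identical for both groups by construction
ARREST_RATE = {"A": 0.30, "B": 0.60}  # group B is policed twice as heavily

# Only arrested offenders make it into the historical records.
arrests = Counter()
for group in ("A", "B"):
    for _ in range(POPULATION):
        offended = random.random() < TRUE_OFFENSE_RATE
        if offended and random.random() < ARREST_RATE[group]:
            arrests[group] += 1

# A naive risk model "learns" arrest frequency as if it were offense risk.
for group in ("A", "B"):
    print(f"Group {group}: learned risk = {arrests[group] / POPULATION:.1%}")
# Group A: learned risk = ~3%
# Group B: learned risk = ~6%  <- same behaviour, double the predicted "risk"
```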


At the same time, there are also cases where Black faces and darker skin tones are under-represented in the dataset. According to researchers at the Georgia Institute of Technology, state-of-the-art detection systems (including the sensors and cameras used in self-driving cars) are better at detecting people with lighter skin tones, because the algorithms were trained primarily on datasets dominated by lighter skin tones. When these systems encountered an individual with a darker skin tone, they had more difficulty recognizing that a human being was present. This bias could lead to dangerous outcomes, including fatal injuries to BIPOC individuals, purely because of under-representation in the training data.
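

One mitigation researchers point to is disaggregated evaluation: measuring a system’s accuracy for each skin-tone group separately, rather than as one overall number. The sketch below uses invented detection results to show how an aggregate score can hide a dangerous gap.

```python
# Hypothetical (skin_tone_group, detected) results for pedestrians a
# detector was tested on -- not data from any real system.
results = [("lighter", True)] * 95 + [("lighter", False)] * 5 + \
          [("darker", True)] * 80 + [("darker", False)] * 20

overall = sum(d for _, d in results) / len(results)
print(f"Overall detection rate: {overall:.0%}")  # 88% -- looks fine in aggregate

for group in ("lighter", "darker"):
    hits = [d for g, d in results if g == group]
    print(f"{group}: {sum(hits) / len(hits):.0%}")
# lighter: 95%
# darker:  80%  <- the gap only shows up when results are disaggregated
```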


This is a controversial debate within the AI community, mainly because the machines and algorithms aren’t racist to begin with; they become biased by learning from their environment. These algorithms merely replicate the patterns they find on the internet and in their datasets; the issue is the human aspect. Going forward, we must be cognizant of the ethical ramifications of these algorithms and minimize their bias as much as possible. Although the tech community does not yet have a conclusive approach to treating “AI racism,” these steps will push us towards solving this systemic issue.