How Artificial Intelligence Will “Learn” to Be Racist

From our daily use of navigation applications to marveling over the cognitive power of AlphaGo, the presence of artificial intelligence is undeniably deep, and it will only continue to ingrain itself in our everyday lives. The idea of programs intrinsically “teaching” themselves concepts, “learning” from datasets, and formulating conclusions autonomously is nothing short of groundbreaking. However, a problem arises when we consider what exactly these programs are learning. While modern America’s battle with racial oppression is glaringly obvious in societal spaces, people tend to see new technological advances as a realm of colorblindness and neutrality. This is a dangerous misconception. These programs often learn from datasets and social standards that were defined throughout American history, a history centered on the perspective of white Americans, and this heavily influences the conclusions the programs come to. The fact that this biased artificial intelligence is being used for purposes ranging from judicial to leisure services is where a grave problem presents itself.

Enforcing institutional racism is one of the most severe crimes committed by the United States justice system, which has historically judged Black and Brown Americans far more harshly than white Americans. Justice seems blind to the lives of countless African Americans and Latinx individuals who are given punishments disproportionate to their offenses. These crimes, along with each individual’s ethnicity and racial background, are extensively recorded and placed into federal databases. In the era of automation and machine learning, this record-keeping sets the stage for a disturbing revelation.

Artificial intelligence is being incorporated into the courtroom with the intention of creating a more “color blind” and equal justice system. A computer algorithm was created to produce risk assessments, which calculate the probability of an individual committing a future crime, inform bail or bond amounts, and suggest reasonable terms for punishment or probation. The program was built through “deep learning”: it would essentially “teach” itself how to determine a sentence based on the analysis of thousands upon thousands of previous court cases. With this information the program generates a number; the higher the number, the greater the predicted risk of future crime. However, the program has proven to yield higher risk scores for Black and Brown individuals despite the lack of any criminal history to justify them.
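To see why a program trained this way inherits bias, consider a minimal, purely illustrative sketch in Python. The data, the groups, the `train` function, and the one-to-ten scoring are all invented for this example; they are not the real courtroom tool. The point is only that a model which learns re-arrest rates from biased historical records will faithfully reproduce that bias in its scores.

```python
# Hypothetical historical records: (group, was_rearrested).
# Suppose biased over-policing recorded re-arrests for group "B"
# far more often, even if actual behavior was identical.
history = ([("A", 1)] * 10 + [("A", 0)] * 30 +
           [("B", 1)] * 24 + [("B", 0)] * 16)

def train(records):
    """'Learn' P(re-arrest | group) directly from historical labels."""
    totals, hits = {}, {}
    for group, rearrested in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + rearrested
    return {g: hits[g] / totals[g] for g in totals}

def risk_score(model, group):
    """Map the learned probability onto a one-to-ten scale."""
    return round(1 + 9 * model[group])

model = train(history)
print(risk_score(model, "A"))  # rate 0.25 -> score 3
print(risk_score(model, "B"))  # rate 0.60 -> score 6
```

Nothing in the code mentions race or intends harm; the disparity comes entirely from the labels the model was given, which is exactly the danger the essay describes.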

For example, in July of 2014, Brisha Borden, a Black eighteen-year-old, was arrested for taking a bicycle and charged with burglary and petty theft for items valued at $80. For comparison, the previous summer, Vernon Prater, a white forty-one-year-old, was arrested for shoplifting $86 worth of merchandise from Home Depot. Despite the fact that Prater had already acquired a criminal record consisting of three armed robbery charges and had served five years in prison, the program generated a higher score for Borden than for Prater. On a scale from one to ten, with ten indicating the highest risk of future crime, Borden received an eight and Prater a three. Prater would go on to steal electronics worth thousands of dollars from a warehouse and serve an eight-year sentence, while Borden never committed another crime.

This is not uncommon. The formula on which the program was based mislabeled Black and Brown individuals as likely future criminals at roughly twice the rate of white individuals, who were often mislabeled as “low risk.” This inaccuracy stems from the data fed to the program, which is employed in courtrooms throughout the country. The program “learned” to judge individuals on data corrupted by court cases shaped by biased rhetoric and historic systemic racism.

A more recent example comes from the release of FaceApp in January 2017. The popular application was designed to display different facial modifications, allowing the user to look younger, older, or even of another gender. Another feature of the app was to “beautify” an individual’s face; however, the app would typically refashion any face by lightening the skin, narrowing the nose, and rounding the eyes. Essentially, it would transform the face to match a European idea of the word “beautiful.” The CEO, Yaroslav Goncharov, called the effect “an unfortunate side-effect of the underlying neural network caused by the training set bias.”
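Training set bias of this kind can be sketched in a few lines. This toy example is hypothetical and drastically simplified (FaceApp’s actual network is far more complex, and these numbers are invented): if a filter’s notion of “beautiful” is effectively an average learned from its training data, and that data over-represents light-skinned faces, the learned ideal is light-skinned, regardless of who later uses the app.

```python
# Skin-tone values on a 0 (dark) to 1 (light) scale for an invented
# training set that is 90% light-skinned faces.
training_faces = [0.85] * 90 + [0.25] * 10

def learn_beauty_template(faces):
    """The 'ideal' the model learns is just the mean of its data."""
    return sum(faces) / len(faces)

def beautify(face_tone, template, strength=0.5):
    """Blend a user's face partway toward the learned template."""
    return face_tone + strength * (template - face_tone)

template = learn_beauty_template(training_faces)
# A dark-skinned user (0.25) is pulled toward the light-skewed
# template, i.e. the filter lightens their skin.
print(beautify(0.25, template))
```

Goncharov’s phrase “training set bias” names exactly this: the skew lives in the data, and the model simply absorbs it.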

The era of automation gives rise to many dynamic and inspirational possibilities; however, it also brings into question what exactly artificially intelligent programs are learning. They are “learning” from data shaped by racist narratives, and they have the very real and very dangerous potential to further enforce racism. The need for People of Color to immerse themselves in computer science fields is greater now than ever. There will never be programs that are truly neutral and equal until all narratives, not just a white narrative, have a hand in their creation. What will happen when robotic policing becomes a reality in America? This is realistically in the foreseeable future, considering the Dubai police force has unveiled the first ever “robocop,” which “can help identify criminals and collect evidence.”

Who is considered a criminal in America? If People of Color do not have a hand in creating the technology that shapes the future, we will find ourselves subject to another level of unjust, insulting, and dangerous oppression.


[Created November 29, 2017]


©2020 by Algorithmic Equity