Algorithmic Equity

 

What is Algorithmic Equity?

The motivation behind this project is to assist in making algorithmic equity a societal standard. This means working towards a future where

  • Communities are algorithmically literate;

  • Detecting and mitigating bias early in development becomes common practice;

  • Research is directed at systemic social and global problems more often than at abstract academic puzzles.

How Algorithmic Equity Relates to Law Enforcement

Algorithms often carry all the biases and failures of the humans who build and train them, but with even less judgment. Below are a few examples of these technologies as used in law enforcement.

  • Predictive Policing: Police departments in some of the largest cities in the United States have been experimenting with predictive policing as a way to forecast criminal activity. Predictive policing uses computer systems to analyze large data sets, including historical crime data, to help decide where to deploy police or to identify individuals who are purportedly more likely to commit, or be the victim of, a crime. Historical data from the United States justice system, however, reflects a system that has long benefited people of European descent and discriminated against minorities, so a model trained on that data inherits the same bias. Predictive policing therefore not only produces biased outcomes that result in over-policing of minority communities, but also creates a feedback loop that reinforces those outcomes: more patrols produce more recorded incidents, which in turn justify more patrols (a minimal simulation of this loop appears after this list).

  • Risk Assessments: Courtrooms across the country have implemented risk assessment algorithms, which estimate the probability that an individual will commit a future crime, help determine bail or bond amounts, and suggest terms for punishment or probation. Like predictive policing, risk assessment algorithms use historical data as their learning foundation. From this information the program generates a score: the higher the score, the greater the predicted risk of future crime. In practice, however, these tools have been shown to assign higher risk scores to Black and brown individuals without differences in actual behavior that would justify the gap (the toy audit after this list shows one common way such a disparity is measured).

  • Facial Recognition: Facial recognition is technology capable of identifying or verifying a person from a digital image or video frame. The problem with facial recognition is double-edged: if it works too well, law enforcement and private technology companies gain free rein to conduct mass surveillance without the public's consent; if it works poorly, it generalizes across races and produces false positives that feed criminal profiles.
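
To make the predictive policing feedback loop concrete, here is a minimal, purely illustrative Python sketch. The neighborhoods, starting counts, and rates are invented; the point is only that deployment driven by past records keeps a recorded disparity alive even when the underlying behavior is identical.

    # Hypothetical numbers: both neighborhoods have the same true rate of crime,
    # but neighborhood B starts with more *recorded* incidents because it was
    # historically over-policed.
    TRUE_CRIME_RATE = 0.05
    STOPS_PER_PATROL = 100
    TOTAL_PATROLS = 1000
    recorded = {"A": 50.0, "B": 150.0}   # the skewed historical record

    for year in range(5):
        total = sum(recorded.values())
        # Deployment is proportional to past *recorded* incidents...
        patrols = {hood: TOTAL_PATROLS * count / total for hood, count in recorded.items()}
        for hood in recorded:
            # ...and more patrols observe and record more incidents,
            # at the exact same underlying rate in both neighborhoods.
            recorded[hood] += patrols[hood] * STOPS_PER_PATROL * TRUE_CRIME_RATE
        print(year, {hood: round(count) for hood, count in recorded.items()})

    # B's recorded count stays three times A's, and the absolute gap grows every
    # year, even though true behavior never differed between the two neighborhoods.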

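The disparity described in the Risk Assessments item above is usually surfaced by an audit rather than being visible in any single score. The Python sketch below uses entirely made-up records and a hypothetical "high risk" threshold to show one common measurement: the false positive rate, i.e. how often people who never reoffended were nonetheless labeled high risk, broken out by group.

    from collections import defaultdict

    # Toy data: (group, risk score from 1-10, reoffended during follow-up?)
    records = [
        ("group_1", 3, False), ("group_1", 8, False), ("group_1", 7, True),
        ("group_1", 9, False), ("group_2", 2, False), ("group_2", 4, False),
        ("group_2", 6, True),  ("group_2", 3, False),
    ]
    THRESHOLD = 7  # scores at or above this are treated as "high risk"

    flagged = defaultdict(int)   # non-reoffenders labeled high risk anyway
    eligible = defaultdict(int)  # all non-reoffenders

    for group, score, reoffended in records:
        if not reoffended:
            eligible[group] += 1
            if score >= THRESHOLD:
                flagged[group] += 1

    for group in sorted(eligible):
        print(f"{group}: false positive rate {flagged[group] / eligible[group]:.0%}")
    # group_1: false positive rate 67%
    # group_2: false positive rate 0%
    # The same nominally objective score burdens one group far more than the other.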

©2020 by Algorithmic Equity