Types of Algorithmic Bias

There are various kinds of algorithmic bias, or rather, various ways in which algorithmic bias manifests itself. These fall into three broad avenues by which algorithmic bias can occur, and each avenue must be addressed in its own way.

Pre-existing

Pre-existing algorithmic bias is the codification of already-present biases (1). If a system designer holds a prejudice and deliberately builds it into a technological solution, that is an instance of pre-existing algorithmic bias. Another instance is the inclusion of an implicit bias in the system, one the designer does not recognize as a cause of discrimination. In short, pre-existing algorithmic bias is a bias that would exist regardless of the algorithmic solution; the algorithm simply incorporates that bias into its processes.
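
To make the distinction concrete, here is a minimal sketch contrasting an explicitly codified prejudice with an implicit one carried in through a proxy variable. The loan-screening scenario, field names, and thresholds are all hypothetical and chosen only for illustration.

    # Illustrative sketch: explicit vs. implicit codification of a pre-existing bias.
    # All names, fields, and thresholds are invented for this example.

    DISFAVORED_AREAS = {"60601", "60602"}  # hypothetical ZIP codes the designer distrusts

    def explicit_bias_screen(applicant: dict) -> str:
        # Explicit codification: the designer's prejudice is written directly into the rule.
        if applicant["zip_code"] in DISFAVORED_AREAS:
            return "reject"
        return "review"

    def implicit_bias_screen(applicant: dict) -> str:
        # Implicit codification: no protected attribute appears, but distance from the
        # city center acts as a proxy for historically segregated neighborhoods,
        # importing the same pre-existing bias without the designer noticing.
        if applicant["miles_from_city_center"] > 15:
            return "reject"
        return "review"

    print(explicit_bias_screen({"zip_code": "60601"}))           # reject
    print(implicit_bias_screen({"miles_from_city_center": 22}))  # reject

Either way, the bias existed before the software did; the code merely gives it a durable, automated form.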

Technical

Technical algorithmic bias is bias that arises from the technical constraints of a system, including how its results are actually presented (2). If an employer is shown top candidates for a position in a structured order that is not based on scoring, some candidates will be advantaged and others disadvantaged: the first name on a list of top candidates has a significant advantage over those at the bottom. Another example is a data-gathering mechanism that works best on the most advanced phones on the market, leaving everyone else represented by a reduced data set. Technical bias problems reflect how genuinely difficult it is to present results objectively.
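
A small sketch of the ordering problem, with invented names and scores: when candidates are tied, an alphabetical tie-break quietly hands the same people the positional advantage every time, while randomizing ties at least spreads that advantage around.

    # Illustrative sketch: how a presentation choice creates technical bias.
    # Names and scores are invented for this example.
    import random

    candidates = [("Ng", 90), ("Abara", 90), ("Zhou", 90), ("Diaz", 90)]

    # Deterministic presentation: ties broken alphabetically, so "Abara" always
    # appears first and always receives the positional advantage.
    alphabetical = sorted(candidates, key=lambda c: (-c[1], c[0]))

    # One possible mitigation: shuffle before sorting so ties are broken at random
    # and no candidate is systematically favored by list position.
    shuffled = candidates[:]
    random.shuffle(shuffled)
    randomized_ties = sorted(shuffled, key=lambda c: -c[1])

    print([name for name, _ in alphabetical])      # always the same order
    print([name for name, _ in randomized_ties])   # order varies run to run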

Emergent

Emergent algorithmic bias is the development of new biases, or new understandings of biases, as technology and society develop (3). For example, if audiobooks became so popular a method of consuming literature that printed books were made obsolete, then the deaf population would be negatively impacted. Another example is a new social development that existing processes for sorting big data were never designed to handle, such as a demographic survey that offers no third option for gender identity even after social awareness around gender identities has improved.
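
The survey example comes down to a schema frozen at design time. A minimal, hypothetical sketch of how that produces emergent bias:

    # Illustrative sketch: a fixed survey schema producing emergent bias once
    # social categories change. Field names and options are hypothetical.

    ALLOWED_GENDER_VALUES = {"male", "female"}  # schema frozen at design time

    def record_response(response: dict) -> dict:
        gender = response.get("gender", "").strip().lower()
        if gender not in ALLOWED_GENDER_VALUES:
            # Anyone outside the original categories is silently collapsed into
            # "unspecified", so downstream statistics undercount them entirely.
            gender = "unspecified"
        return {"age": response["age"], "gender": gender}

    print(record_response({"age": 29, "gender": "non-binary"}))
    # {'age': 29, 'gender': 'unspecified'}  <- a category the schema never anticipated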

Current Examples of Algorithmic Bias

Algorithmic bias is not a theoretical problem: there are instances of actual bias being implemented right now, and algorithmic bias can often fall under the protections of the disparate impact legal doctrine.

Chicago Police Department’s Heat List

The Chicago Police Department has turned to a predictive policing initiative in order to reduce gun violence (4). The CPD is using a technology that takes various, undisclosed factors into account, runs those factors through an algorithm, and scores people as part of a “heat list.” Often, those on the heat list are contacted directly by the CPD and notified that they are on the department’s radar as people to watch: they are invited to a community meeting, notified in writing, or told in person at their homes. The software’s variables, as well as the maker of the software, remain unknown and unexamined; all that has been revealed is that criminal history, known criminal associates, and whether a person has been the victim of a crime are somehow included in the process. Given the serious consequences of a computer program determining who is most likely to become a criminal, it was inevitable that a lawsuit would emerge to uncover the program’s underlying processes. The Chicago Sun-Times has filed a lawsuit in the Chancery Division of the Cook County Circuit Court under the Freedom of Information Act to find out the nature of the algorithm, the maker of the algorithm, and the race of each person on the list, among other factors (5). The CPD refused the initial FOIA request, claiming it would be “unduly burdensome” to provide those details. Clearly, the Chicago Sun-Times is suspicious of the discriminatory impact this program could have on Chicago residents.

Statistics have shown that Black communities have a higher incidence of poverty and crime, and if being a known associate of someone who has been convicted of a crime is a factor in these heat lists, then Black communities will be disproportionately singled out by them. If the program cannot be shown to identify future criminals with high accuracy, then the undue attention paid to the citizens it flags should be stopped. However, there is a larger problem. If police officers look first at suspects who appear on the heat lists, or even identify suspects solely through the lists, the result can be a conviction, or simply a guilty plea, because an undue amount of trust is placed in these systems and officers end up rationalizing a person’s activity into being part of a given crime. The system’s measured accuracy then goes up, feeding officers’ confidence in it and further punishing the communities the system identified in the first place.
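
Here is a deliberately toy model of that feedback loop; every number in it is invented, and it is not a model of the CPD’s actual system. Two areas share the same underlying offense rate, but the area with more recorded incidents gets the larger share of patrols, and only patrolled offenses get recorded, so its record keeps pulling further ahead.

    # Illustrative toy model of the recorded-crime feedback loop. All values invented.
    recorded = {"area_a": 12, "area_b": 10}   # historical recorded incidents
    TRUE_RATE = 0.05                          # identical underlying rate in both areas
    PATROLS_PER_YEAR = 200

    for year in range(1, 6):
        leader = max(recorded, key=recorded.get)
        for area in recorded:
            # The "hotter" area draws 80% of patrols purely because of its record.
            patrols = PATROLS_PER_YEAR * (0.8 if area == leader else 0.2)
            recorded[area] += patrols * TRUE_RATE   # more patrols -> more records
        print(f"year {year}: {recorded}")
    # area_a's record pulls away from area_b's even though the true rates are equal,
    # which in turn keeps justifying the extra attention it receives.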

Credit Scores

The same problem of confirmation bias can occur in other contexts. Credit scores often reflect a class division along racial lines (6). Some credit-scoring systems look at a person’s relationships to determine whether they associate with people who pay back their loans on time; the more good borrowers a person associates with, the higher the inferred likelihood that the person will also be a good borrower. However, many minority groups are more likely to have lower credit scores (7). Because one’s associations tend to be drawn largely from one’s own ethnic group, a member of such a group is likely to begin with a lower score simply by virtue of that membership, even though the score never directly considers race.
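
As a concrete sketch, consider a hypothetical association-based score that nudges each person’s score toward the average score of their contacts. The names, numbers, and 70/30 weighting are invented; no real scoring model is reproduced here.

    # Illustrative sketch: a hypothetical association-based credit score.
    # Names, scores, and weights are invented for this example.

    base_scores = {"amara": 700, "bo": 640, "cruz": 650, "dee": 700, "eli": 740, "fen": 730}

    # Social ties tend to stay within a community: the first three people mostly
    # know each other, and so do the last three.
    contacts = {
        "amara": ["bo", "cruz"],
        "bo": ["amara", "cruz"],
        "cruz": ["amara", "bo"],
        "dee": ["eli", "fen"],
        "eli": ["dee", "fen"],
        "fen": ["dee", "eli"],
    }

    def adjusted_score(person: str) -> float:
        peer_avg = sum(base_scores[p] for p in contacts[person]) / len(contacts[person])
        # 70% own history, 30% inferred from associates' repayment behavior.
        return 0.7 * base_scores[person] + 0.3 * peer_avg

    print(adjusted_score("amara"))  # ~683: pulled down toward her community's average
    print(adjusted_score("dee"))    # ~710: identical base score, different community

amara and dee have identical repayment histories, yet amara is scored lower purely because of whom she knows, and race never appears anywhere in the inputs.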

Airbnb

Discriminatory practices can take a more subtle form when the discrimination is merely enabled by the provider. Such is the case with Airbnb, where hosts were rejecting guests on the basis of race, a practice unintentionally enabled by the online platform. Airbnb and others in the same position could easily be motivated to serve the discriminatory preferences of their user base: happy users lead to more bookings and more revenue.

Facial Recognition Technology

Facial recognition technology has a difficult time identifying Black people (8). Beyond disrupting face swaps on Snapchat, this failure creates foundational problems for any technology that relies on facial recognition. For example, as self-driving cars loom, society has grappled with the problem of whom the car will opt to save in the event of a crash (9). It is a given that the car will make decisions based on many factors, including the number of people at risk under a particular course of action. If the car cannot recognize the actual number of people at risk because it cannot detect Black faces riding in a passenger vehicle, then Black people will die in unavoidable car accidents at a higher rate than lighter-skinned people.
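
A toy sketch of that count-the-people-at-risk logic, with invented detection rates, shows how a detection gap skews the outcome. Nothing here reflects any real vehicle’s decision system.

    # Illustrative toy decision rule: steer toward the path that appears to put
    # the fewest *detected* people at risk. Detection rates are invented.

    def people_detected(faces, detection_rate):
        # Count only the faces the vision system actually detects.
        return sum(1 for group in faces if detection_rate[group] >= 0.5)

    def choose_path(path_a_faces, path_b_faces, detection_rate):
        a = people_detected(path_a_faces, detection_rate)
        b = people_detected(path_b_faces, detection_rate)
        return "A" if a <= b else "B"

    # Hypothetical rates: lighter-skinned faces are detected reliably, darker-skinned
    # faces are missed more often than not.
    rates = {"lighter_skinned": 0.95, "darker_skinned": 0.40}

    path_a = ["darker_skinned", "darker_skinned", "darker_skinned"]  # three people, uncounted
    path_b = ["lighter_skinned"]                                     # one person, counted

    print(choose_path(path_a, path_b, rates))
    # Prints "A": the car steers toward the three people it failed to count.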


  1. http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf
  2. http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf
  3. http://www.nyu.edu/projects/nissenbaum/papers/biasincomputers.pdf
  4. http://time.com/4966125/police-departments-algorithms-chicago/
  5. https://drive.google.com/file/d/0B1_UcIgpv9WHUk1fT1FNd09na1RjMHJUUkowZloxaHVBQlg0/view
  6. https://www.theatlantic.com/technology/archive/2016/12/how-algorithms-can-bring-down-minorities-credit-scores/509333/
  7. https://www.federalreserve.gov/boarddocs/rptcongress/creditscore/creditscore.pdf at O-13
    • “Differences in credit scores among racial or ethnic groups and age cohorts are particularly notable because they are larger than for other populations. For example, the mean normalized TransRisk Score for Asians is 54.8; for non-Hispanic whites, 54.0; for Hispanics, 38.2; and for blacks, 25.6 (figure O-1). Credit scores by age increase consistently from young to old: The mean TransRisk Score for individuals younger than age 30 was 34.3; for those aged 62 or older, it was 68.1”
  8. http://www.pbs.org/wgbh/nova/next/tech/ai-bias/
  9. http://science.sciencemag.org/content/352/6293/1573