How software and machines learn discrimination

Artificial intelligence can absorb prejudices that mirror some of the unsavory views held by far too many people around the world. How is this possible in machines that have no built-in tendencies or biases?

For example, open the photo application on your phone and type the search term ‘cat’, and every picture of a cat you have taken will appear. This is no small feat: it implies the device has some notion of what a cat looks like. It is the result of an increasingly significant technique known as machine learning.

Machine learning programs sift through huge quantities of data and make correlations and predictions about the world around them. These systems can use data to make decisions, sometimes with more accuracy than a human being. However, the machines are trained on human data, and human data carries human biases.
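
To make that concrete, here is a minimal, hypothetical sketch of supervised machine learning in Python. The features and numbers are invented purely for illustration: a model is shown labelled examples and learns a rule it can apply to data it has never seen.

```python
# A toy classifier: learn "cat vs. not cat" from labelled examples.
# The two features (ear pointiness, whisker length) and their values
# are made up purely to illustrate the learn-from-examples idea.
from sklearn.linear_model import LogisticRegression

X_train = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]  # feature vectors
y_train = [1, 1, 0, 0]                                      # 1 = cat, 0 = not cat

model = LogisticRegression().fit(X_train, y_train)

# The trained model generalizes to new, unseen examples.
print(model.predict([[0.85, 0.75]]))  # -> [1]: classified as a cat
```

Whatever regularities are present in the training examples, the model will faithfully reproduce, which is exactly why the data it learns from matters so much.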

ProPublica published an investigation into a machine learning program that courts were using to systematically predict which defendants were likely to commit new offenses after being booked.


Reporters found that the software rated black defendants as more likely to commit a crime than white defendants. The program learned who was most likely to end up in prison from real-world incarceration data.

Historically, the real-world criminal justice system has not been fair to the black community, and when systems trained on that history begin to inform decision making, that is a big problem.
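
The mechanism is easy to demonstrate with a toy example. The sketch below is not the software the courts used; the group labels, the proxy feature, and the skewed historical outcomes are all synthetic, and exist only to show that a model trained on biased outcomes hands the bias back as "risk scores."

```python
# Toy demonstration: a model trained on skewed historical outcomes
# reproduces the skew. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)  # a protected attribute: 0 or 1

# Historical labels: group 1 was recorded as reoffending more often,
# reflecting biased enforcement rather than behaviour.
reoffended = (rng.random(n) < 0.3 + 0.2 * group).astype(int)

# Even without the group label, a correlated proxy feature carries the bias.
proxy = (group + rng.normal(0, 0.3, n)).reshape(-1, 1)

model = LogisticRegression().fit(proxy, reoffended)
scores = model.predict_proba(proxy)[:, 1]

print("mean risk score, group 0:", round(scores[group == 0].mean(), 2))
print("mean risk score, group 1:", round(scores[group == 1].mean(), 2))  # higher
```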

The story reveals a deep irony about machine learning. These systems' main appeal is their ability to make accurate decisions free of human bias, or so the sales pitch goes. According to ProPublica, if computers could accurately predict which defendants were likely to commit new crimes, the criminal justice system could be fairer and more selective about who is incarcerated and for how long.

What has happened instead is that machine learning programs have perpetuated existing biases on a large scale. Instead of a single prejudiced judge, it is now software with reach and influence over a variety of social systems.

Purpose and context

Researchers did not deliberately teach the next generation of artificial intelligence systems to be prejudiced against African Americans or women; the systems learned it themselves from input taken straight from the Internet.

Scientists, including Aylin Caliskan, fed a Global Vectors for Word Representation (GloVe) model a corpus of 840 billion words, including articles from ostensibly neutral sites such as Wikipedia. The model was then put to the test using a new version of the Implicit Association Test (IAT).

Using this test, the scientists noticed that GloVe displayed familiar human associations, such as linking flowers with beauty and insects with unpleasant things. However, the model also showed a strong association between African American names and weapons, and between female names and chores or domestic tasks rather than professional careers.
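
The underlying measurement is straightforward: in a word-embedding model like GloVe, each word is a vector, and "association" can be read off as a difference in cosine similarity. The sketch below uses tiny made-up vectors rather than the real, high-dimensional GloVe embeddings, but it illustrates the kind of embedding-association test, modelled on the IAT, that surfaces the flower/insect result described above.

```python
# A minimal sketch of an embedding-association test: how much closer is a
# target word to one attribute than to another, measured by cosine similarity?
# The 3-dimensional vectors below are invented for illustration; the real
# study probed high-dimensional GloVe vectors trained on 840 billion words.
import numpy as np

vectors = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    """Positive: `word` sits closer to attr_a; negative: closer to attr_b."""
    return cosine(vectors[word], vectors[attr_a]) - cosine(vectors[word], vectors[attr_b])

print(association("flower", "pleasant", "unpleasant"))  # > 0: flowers lean pleasant
print(association("insect", "pleasant", "unpleasant"))  # < 0: insects lean unpleasant
```

The same arithmetic, applied to names and to occupation or weapon words, is what revealed the troubling associations described above.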

AI does not know context

Bias in machine learning is troubling because these systems can be deployed on platforms that affect people in critical ways. Law enforcement, for example, has used software to identify potential criminals, but the software ended up profiling particular groups based on the database it drew from.

The debate centers on whether machines learn to be prejudiced from language itself, or whether the language they are fed is already loaded with stereotypes. Researchers are still working to correct these issues, but pairing the output of these artificial intelligence programs with human judgment of context could help.


 
