Artificial intelligence appears poised for a major breakthrough, with machine learning leading the way. When that happens, anyone with enough knowledge could abuse an AI system to target certain populations.

For example, regimes around the world that demand strict obedience from their citizens could exploit artificial intelligence in many ways. It’s an outcome the technology industry does not want to see come to fruition, but one it might not be able to stop.

Kate Crawford of Microsoft Research argues that AI systems absorb human biases without most people realizing it. She expects fascist, right-wing, and communist movements around the world to exploit these systems, seeking to demonize outsiders and track the movements of citizens.

“Just as we are seeing a step function increase in the spread of AI, something else is happening: the rise of ultra-nationalism, rightwing authoritarianism and fascism,” Crawford said during her SXSW session, titled “Dark Days: AI and the Rise of Fascism.”


Controlling a populace could be as easy as finding the right person to code an artificial intelligence system in a way that benefits the government.

An example of biased coding

Researchers from China’s Shanghai Jiao Tong University claimed to have created a bias-free artificial intelligence system trained on Chinese government ID photographs. Here’s the thing: the AI supposedly uses those photos to tell whether a person is a criminal. Apparently, the data show that criminal faces are more unusual when compared to those of law-abiding citizens.

If your physical appearance differs from what is considered normal, law enforcement could treat you as a problem should they go by what an AI system has to say.
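The danger described above can be sketched with a toy example. The code below is purely hypothetical and is not the study’s actual method: it builds a synthetic dataset in which “criminal” labels were assigned more often to faces scoring high on an invented “unusualness” feature, then fits a simple threshold classifier. The model dutifully learns to flag unusual-looking people, reproducing the bias baked into its training labels.

```python
import random

random.seed(0)

def make_dataset(n=1000):
    """Synthetic training data with biased labels.

    'unusualness' is a made-up score for how far a face deviates from the
    dataset's average. The labels are biased: the more unusual a face,
    the more likely it was labelled 'criminal' (e.g. because of skewed
    photo sources), regardless of any real-world truth.
    """
    data = []
    for _ in range(n):
        unusualness = random.random()
        label = 1 if random.random() < 0.2 + 0.6 * unusualness else 0
        data.append((unusualness, label))
    return data

def train_threshold(data):
    """Pick the 'unusualness' cutoff that best fits the biased labels."""
    best_t, best_acc = 0.0, 0.0
    for i in range(101):
        t = i / 100
        acc = sum((u > t) == bool(y) for u, y in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

data = make_dataset()
threshold = train_threshold(data)
# The trained "model" now flags anyone whose appearance is sufficiently
# unusual -- it learned the bias in the labels, not anything about crime.
print(f"learned threshold on unusualness: {threshold:.2f}")
```

The point of the sketch is that the classifier is working exactly as trained: if the labels encode a prejudice, an accurate model encodes the same prejudice.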

Crawford is very concerned about authoritarian regimes having the power to create human registries for targeting particular populations.

This type of AI is already in U.S. hands

An AI system designed to aid mass deportation already exists in the United States. A company known as Palantir has been working on it since 2014. What’s interesting here is that the company’s co-founder, Peter Thiel, is an advisor to President Donald Trump.

Interestingly enough, Crawford believes predictive policing has failed, because several studies show it results in unfair treatment of minorities.

Hey, what about the political left?

Crawford said nothing about the left and how that part of the political sphere could use AI in biased ways. Keep in mind that political organizations of all stripes use whatever means are available to push their agendas, so biased AI coding is not exclusive to the right wing, the communists, and the fascists. It’s a problem we should expect from the left as well, and Crawford’s failure to recognize that shows her own bias.




