
Bias in AI: A Problem or a Potential Solution?

By Tarun Amasa

Our actions in a single day are the product of thousands of decisions, each shaped by our experiences, tastes, and predetermined biases. Every choice is a complex combination of conscious and subconscious thought and the determinants that lie within. In the current literature, psychologists have begun to examine these thoughts, finding that unconscious bias is fueled by survival and evolutionary pressures. Research projects such as Project Implicit, hosted by Harvard University, have shed light on individuals' biases around race, gender, ethnicity, sexuality, and other personal traits. The results are striking: even the most altruistic individuals often carry some level of bias, whether or not they are aware of it.

Such findings prompt an intellectual conversation about the construction of Artificial Intelligence and the bias it may possess. In traditional machine learning settings, a human labels data to “train” an algorithm to determine whether a given input falls into one class or another. This process is relatively harmless when deciding if an image contains a flower or a car, but it quickly gets out of hand with more nuanced subject material. When delving into facial recognition, crime data, or other swathes of personal information, MIT Tech Review reports that these datasets are often artificially skewed by the eyes of the researcher or evaluator. The effect is multiplied when machine learning algorithms are used to make decisions, compounding into future adverse outcomes. One example lies in incarceration rates in the United States, where the NAACP estimates that African Americans are five times more likely to be arrested for committing the same crime. If these raw arrest statistics were used to train a model that determines the outcome of a court case, the model would reproduce the same inequitable outcomes. While it is increasingly clear that such disparities exist in these technologies, tackling the underlying biases is a difficult problem.
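To make the compounding concrete, here is a minimal sketch, using hypothetical numbers, of how a naive model trained only on observed arrest rates reproduces the cited disparity even when the underlying behavior of both groups is identical:

```python
# Toy sketch with hypothetical numbers: two groups with the SAME
# underlying offense rate, but group B is arrested 5x as often --
# mirroring the NAACP disparity cited above.
true_offense_rate = 0.10          # identical for both groups
arrest_multiplier = {"A": 1, "B": 5}

# "Training data": observed arrest rates, the only signal a naive model sees
observed_arrest_rate = {
    group: true_offense_rate * mult for group, mult in arrest_multiplier.items()
}

# A naive risk model that scores each defendant by their group's arrest rate
def naive_risk_score(group):
    return observed_arrest_rate[group]

print(naive_risk_score("A"))  # scores group A by its observed arrest rate
print(naive_risk_score("B"))  # 5x the score for identical underlying behavior
```

The model never sees the true offense rate, only the skewed arrest record, so the disparity in the data becomes a disparity in every future prediction.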

The crux of the problem is that humans will always be biased, and any technology that relies primarily on their judgement will reflect those biases. The solution, therefore, must be to minimize bias as far as possible through intentional effort.

This is easier said than done, as algorithm builders are often unaware of their own biases. A first checkpoint, then, is for builders to complete an implicit bias test, similar to Project Implicit described above. The second piece of the puzzle is hiring a diverse pool of human evaluators, varying in age, sexual orientation, race, religion, and more, to reduce the gaps in the initial training dataset.
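One simple way to check the second step is to measure how evenly the evaluator pool is represented before labeling begins. The sketch below uses placeholder group labels and an arbitrary illustrative threshold, not a real demographic audit:

```python
from collections import Counter

# Hypothetical annotator roster for a labeling project; the group
# labels are placeholders, not real demographic data.
annotator_groups = ["group_x", "group_x", "group_x",
                    "group_x", "group_x", "group_y"]

counts = Counter(annotator_groups)
total = len(annotator_groups)
shares = {group: n / total for group, n in counts.items()}

# Flag the pool as imbalanced if any group holds more than twice the
# share of the smallest group -- an arbitrary illustrative threshold.
imbalanced = max(shares.values()) > 2 * min(shares.values())
print(shares, imbalanced)
```

A check like this is only a starting point, but it makes gaps in the evaluator pool visible before they become gaps in the training data.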

The third, and perhaps most important, part of the puzzle is deploying the newly trained algorithm to ‘correct’ past results. Given the scalability, cost-effectiveness, and low implementation barriers of these algorithms, a less-biased system could act as a catalyst to reverse bias that exists in the world today. Inequitable justice systems and hiring outcomes can be investigated through a novel artificial intelligence lens, and the subsequent findings can serve as an important vector for reform. The paradigm can thus ultimately shift from reducing bias in AI to using properly trained AI systems to alleviate existing societal biases.
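The correction step might look like an audit: score historical decisions with the retrained model and flag the cases where the two strongly disagree. The records, scores, and threshold below are all hypothetical, a sketch of the idea rather than a real audit pipeline:

```python
# Hypothetical audit: flag historical decisions that a retrained,
# less-biased model scores very differently from the recorded outcome.
# outcome is the original binary decision; debiased_score is the new
# model's estimated probability of that same decision being warranted.
historical = [
    {"case": 1, "outcome": 1, "debiased_score": 0.2},
    {"case": 2, "outcome": 0, "debiased_score": 0.1},
    {"case": 3, "outcome": 1, "debiased_score": 0.9},
]

DISAGREEMENT_THRESHOLD = 0.5  # arbitrary illustrative cutoff

flagged = [record["case"] for record in historical
           if abs(record["outcome"] - record["debiased_score"]) > DISAGREEMENT_THRESHOLD]
print(flagged)  # cases where the old outcome and the new model strongly disagree
```

Flagged cases would still need human review; the model's role is to surface candidates for reform at a scale no manual process could match.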