Today, algorithms are part of our everyday life, and whether we want it or not, our reality is shaped by invisible, intelligent systems. Artificial Intelligence (AI) can be an extremely efficient tool for making data-driven predictions, and it is increasingly used for recruitment. But what happens if we feed a system biased data? It produces an unfair AI that inadvertently discriminates against already marginalized groups.
Over the last decade, personal data has become one of the most valuable commodities because it is an effective tool for predicting future behavior. But according to a report written for the Anti-Discrimination Department of the Council of Europe, algorithmic decision-making carries many risks of discrimination. Combined with the lack of moral and ethical guidelines around AI technologies, this has made room for systems that discriminate and reinforce social inequality.
Even though software can't be biased in itself, if we feed it data from today's (unequal) society, the algorithm will predict a future that reflects that inequality, making the rich richer and the poor poorer. When we import large amounts of data and let algorithms analyze it to make predictions, we create systems built for "profiling" people. And when this kind of AI is used as a legitimate basis for important decisions about people, the results can be detrimental, as the sketch below illustrates.
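To make the mechanism concrete, here is a minimal, hypothetical sketch; the data, group labels, and model choice are illustrative assumptions, not drawn from any real recruitment system. A classifier trained on historical hiring decisions that favored one group will reproduce that pattern for new, equally skilled candidates, even though the code contains no explicit rule about group membership.

```python
# Minimal sketch (hypothetical data): a model trained on biased historical
# decisions reproduces the bias, even without any explicit rule about "group".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally skilled groups (same skill distribution)...
group = rng.integers(0, 2, size=n)           # 0 = group A, 1 = group B
skill = rng.normal(0, 1, size=n)

# ...but the historical "hired" labels favored group A.
hired = (skill + 1.0 * (group == 0) + rng.normal(0, 0.5, size=n)) > 1.0

# Train on the biased history, with group (or any proxy for it) as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Score new candidates with identical skill from each group.
new_skill = np.zeros(1000)
for g in (0, 1):
    X_new = np.column_stack([new_skill, np.full(1000, g)])
    rate = model.predict(X_new).mean()
    print(f"group {g}: predicted hire rate = {rate:.2f}")
# The model favors group A for equally skilled candidates,
# purely because the historical data did.
```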
There are many examples of systems that have produced unfair or even discriminatory results, and it is crucial to understand that there is always potential for bias to creep in over time. After all, who determines whether the system is still performing well a couple of years down the line?
One example comes from the public sector, where a system actually reinforced social inequality: the notorious "Correctional Offender Management Profiling for Alternative Sanctions" (COMPAS), which was used to predict whether defendants would re-commit a violent crime. A great idea in theory, but when the results were analyzed, the algorithm repeatedly discriminated against one group: black defendants were roughly twice as likely as white defendants to be incorrectly labeled as likely to re-offend, a prediction the actual re-offense data did not support.
“The data showed that black defendants were twice as likely to be incorrectly labeled as higher risk, compared to white defendants”
Another well-known example comes from the private sector. Amazon stopped using an AI system for screening job applicants because the system was biased against women. It became clear that the algorithm wasn't rating candidates in a gender-neutral way. Instead, its ratings were based on historical training data in which most successful applicants were men, so the system taught itself that male candidates were preferable.
A 2018 research study by Lauren Rhue indicated that algorithmic bias is especially prevalent in facial scans of non-white people. She found that two different facial-analysis programs interpreted images of Black NBA players as more angry and more contemptuous, respectively, than those of white players. In 2021, Ifeoma Ajunwa, an associate professor of law at the University of North Carolina School of Law, explored the potentially discriminatory effects of AI-based video interview technology and concluded that video interviewing rests on shaky or unproven technological principles that disproportionately impact racial minorities.
There are also organizations that promote ethical AI, such as the Algorithmic Justice League, which combines art and research to illuminate the social implications and harms of AI. Its mission is to highlight unchecked, unregulated, and at times unwanted AI systems that can amplify racism, sexism, ableism, and other forms of discrimination. The documentary "Coded Bias" illuminates our mass misconceptions about AI and emphasizes the urgent need for legislative protection.
With AI playing an increasingly important role in organizations, it's becoming clear that we need moral and ethical guidelines. Transparency and systematic testing of AI will be key to building predictive systems that don't reproduce or even amplify discrimination; one concrete form of testing is a regular audit of the system's outcomes, as sketched below. Diverse, cross-disciplinary teams also evaluate a system from more perspectives and increase the chance of creating a less biased one.
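As an illustration of what "testing AI" for bias can look like in practice, here is a minimal sketch of an outcome audit. The group labels, example records, and the specific rates compared are illustrative assumptions, not a description of any particular vendor's process.

```python
# Minimal sketch of a bias audit (hypothetical data):
# compare selection rates and false-positive rates across groups.
from collections import defaultdict

def audit(records):
    """records: list of (group, predicted_hire, actually_qualified) tuples."""
    stats = defaultdict(lambda: {"n": 0, "selected": 0, "fp": 0, "neg": 0})
    for group, predicted, qualified in records:
        s = stats[group]
        s["n"] += 1
        s["selected"] += predicted
        if not qualified:          # among the not-qualified candidates...
            s["neg"] += 1
            s["fp"] += predicted   # ...how often are they still selected?
    for group, s in sorted(stats.items()):
        sel_rate = s["selected"] / s["n"]
        fp_rate = s["fp"] / s["neg"] if s["neg"] else 0.0
        print(f"{group}: selection rate {sel_rate:.2f}, "
              f"false-positive rate {fp_rate:.2f}")

# Hypothetical records: (group, model said hire, candidate was qualified)
records = [
    ("A", 1, 1), ("A", 1, 0), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 1), ("B", 0, 0), ("B", 1, 1), ("B", 0, 1),
]
audit(records)
# Large gaps between groups in either rate are a signal to investigate
# the training data and features before trusting the system's decisions.
```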
Tengai is an innovative AI interview that combines conversational AI with an unbiased recruitment methodology. Tengai's sole purpose is to assist recruiters and hiring managers in making objective assessments. Even though Tengai has certain human qualities to make her relatable, she lacks the cognitive ability to judge. Because gut feeling is eliminated completely, a candidate's age, sex, appearance, and dialect become irrelevant. And while the goal of a human meeting is to charm one another, Tengai cannot be charmed, since she only measures competency.
At Tengai, we recognize that our software can have an impact on individuals and on society as a whole, and we take this responsibility seriously. One critical aspect of our responsible AI development is therefore identifying and mitigating bias. To ensure that Tengai's framework is unbiased, we asked psychometric experts to test the interview and validate the assessment. The results show that Tengai can conduct objective interviews, assess work performance, and contribute to a more unbiased interview process.
Book a demo to learn how Tengai can help you create a more unbiased screening process.