The world is changing, and we rely more and more on Artificial Intelligence (AI) to help us make decisions. Today, algorithms are part of our everyday life, and whether we like it or not, our reality is shaped by invisible, intelligent systems. AI can be an efficient tool for data-driven predictions and is increasingly used for recruitment purposes. But what happens if we feed a system biased data? We get an unfair AI that inadvertently discriminates against already marginalized groups.

 

Your software is only as fair as the data you provide

Personal data has become one of the most valuable commodities of the last decade because it is an effective tool for predicting future behavior. But according to a report written for the Anti-discrimination department of the Council of Europe, algorithmic decision-making carries many risks of discrimination. Today we can import large amounts of data and let algorithms analyze it to make predictions. This is also called “profiling”, and it is used as a legitimate ground for making important decisions about people. But the lack of moral and ethical guidelines has made room for systems that discriminate and reinforce social inequality. Even though software can’t be biased in itself, if we feed it data from today’s (unequal) society, the algorithm will predict a future that reflects that society, making the rich richer and the poor poorer.

The importance of ethical software

There have been several examples of systems producing unfair or even discriminatory results, and it is crucial to understand that there is always potential for bias to creep in over time. After all, who is checking whether the system is still doing well after a couple of years?

“The data showed that black defendants were twice as likely to be incorrectly labeled as higher risk, compared to white defendants” 

With AI playing a more important role in organizations, it’s becoming clear that we need moral and ethical guidelines. Transparency and rigorous testing will therefore be key to building predictive systems that don’t reproduce, or even amplify, discrimination. Diverse and cross-disciplinary teams can evaluate a system from more perspectives and increase the chance of creating a less biased one.

Reinforcing social inequality

One example comes from the public sector, where a system actually reinforced social inequality: the notorious “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), which was used to predict whether defendants would commit a violent crime again. A great idea in theory, but when the results were analyzed, the algorithm repeatedly discriminated against one group. Black defendants were twice as likely as white defendants to be incorrectly labeled as high risk, even though their actual outcomes did not support that prediction.
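One way this kind of skew can be surfaced is by auditing a model's predictions group by group. Below is a minimal sketch in Python, using invented audit records rather than the real COMPAS data, of how comparing false positive rates (the share of people labeled high risk who did not in fact reoffend) can expose the disparity described in the quote above.

```python
# Minimal audit sketch with invented records -- NOT the actual COMPAS data.
# The false positive rate answers: of the people who did NOT reoffend,
# how many were still labeled "high risk"?

def false_positive_rate(records):
    """Share of non-reoffenders who were nevertheless labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    if not non_reoffenders:
        return 0.0
    flagged = [r for r in non_reoffenders if r["predicted_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Hypothetical audit records for two groups.
audit = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": True},
]

for group in ("A", "B"):
    rate = false_positive_rate([r for r in audit if r["group"] == group])
    print(f"Group {group}: false positive rate = {rate:.0%}")
```

If one group's false positive rate is consistently higher, the system is making more costly mistakes at that group's expense, even if its overall accuracy looks acceptable.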

Another well-known example comes from the private sector: Amazon stopped using an AI system for screening job applicants because the system was biased against women. It became clear that the algorithm wasn’t rating candidates in a gender-neutral way. Instead, its results were based on historical training data in which the majority of successful candidates were men, so the system taught itself that male candidates were preferable.
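To make that mechanism concrete, here is a minimal sketch with invented resumes and a deliberately naive scoring rule (nothing resembling Amazon's actual system) showing how a model that learns only from historical hiring outcomes ends up penalizing terms associated with the underrepresented group.

```python
# Illustrative sketch with invented data -- not Amazon's system.
# A naive scorer weights resume terms by how often they appeared in past
# "hired" versus "rejected" resumes, so terms correlated with the
# historically favored group end up boosting a candidate's score.

from collections import Counter

# Hypothetical history: most past hires came from one group,
# so their typical terms dominate the "hired" vocabulary.
past_hires = [
    "captain chess club", "captain football team", "chess club president",
]
past_rejections = [
    "captain women's chess club", "women's football team captain",
]

hired_terms = Counter(" ".join(past_hires).split())
rejected_terms = Counter(" ".join(past_rejections).split())

def score(resume: str) -> int:
    """Score a resume by the net association of its terms with past hires."""
    return sum(hired_terms[t] - rejected_terms[t] for t in resume.split())

# Two candidates with equivalent experience; one gender-coded word differs.
print(score("captain chess club"))          # higher score
print(score("captain women's chess club"))  # penalized for "women's"
```

Because the historical “hired” examples contain almost none of the minority group’s gender-coded terms, the scorer treats those terms as negative signals, the same dynamic the Amazon system reportedly exhibited.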


Research on algorithmic bias 

A 2018 research study by Lauren Rhue indicated that algorithmic bias is most prevalent in facial scans of non-white people. She found that two different facial-analysis programs interpreted images of Black NBA players as angrier and more contemptuous, respectively, than images of white players. In 2021, Ifeoma Ajunwa, an associate professor of law at the University of North Carolina’s Law School, explored the potentially discriminatory effects of AI-based video interview technology and concluded that video interviewing rests on shaky or unproven technological principles that disproportionately impact racial minorities.

There are also organizations that promote ethical AI, such as the Algorithmic Justice League, which combines art and research to illuminate the social implications and harms of AI. Its mission is to highlight unchecked, unregulated, and at times unwanted AI systems that can amplify racism, sexism, ableism, and other forms of discrimination. Its documentary "Coded Bias" exposes widespread misconceptions about AI and emphasizes the urgent need for legislative protection.

Find more AI research here > 


Tengai's approach to ethical AI 

Tengai is an innovative AI interview that combines conversational AI with an unbiased recruitment methodology. Tengai’s sole purpose is to assist recruiters and hiring managers in making objective assessments. Even though Tengai has certain human qualities that make her relatable, she lacks the cognitive ability to judge. Because gut feeling is eliminated completely, a candidate’s age, sex, appearance, and dialect become irrelevant. And while the goal of a human meeting is to charm one another, Tengai cannot be charmed, since she only measures competency.

To ensure that Tengai's framework is 100% unbiased, we asked psychometric experts to test the interview and validate the assessment. The results show that Tengai can conduct objective interviews, assess work performance, and contribute to a more unbiased interview process.

Book a demo to learn how Tengai can help you create a more unbiased screening process >

