AI

AI can help you hire on LinkedIn, but it will likely be racist

Going through hundreds of CVs and cover letters can be a boring job. But are AI-powered recruitment assistants ready to replace humans?

LinkedIn has recently made a push to incorporate artificial intelligence (AI) into its platform by introducing the Hiring Assistant. The AI feature is promised to help headhunters with various recruitment tasks, including interacting with job candidates before and after interviews.

LinkedIn is not the only one seizing the AI momentum to automate filtering job candidates and assist in the job hunt. Tools such as RecruiterGPT, Moonhub, Tombo, and human resource-oriented GPTs have been growing in popularity.

AI models have been under fire for gender and racial biases, pushing companies into placing safeguards. Despite efforts, the AI chatbots are still pretty much tuned into Western society’s discourse and subtle systematic biases accrue, potentially leaving large numbers of non-Western people misrepresented.

For example, many systems developed in Western countries often lack guardrails that account for non-Western social concepts, such as caste in South Asia.

Shedding light on AI safety

In a new study, researchers from the University of Washington tested eight models – including two ChatGPT models from OpenAI and two open-source Llama models from Meta – to explore how bias might manifest.

The team developed the Covert Harms and Social Threats (CHAST) framework, based on social science theories, to classify these hidden harms. It includes seven metrics, such as “competence threats,” which undermine a group’s abilities, and “symbolic threats,” where outsiders are seen as a danger to the group’s values or morals.

“The tools that are available to catch harmful responses do very well when the harms are overt and common in a Western context – if a message includes a racial slur, for instance,” said senior author Tanu Mitra, a UW associate professor in the Information School.

“But we wanted to study a technique that can better detect covert harms. And we wanted to do so across a range of models because it’s almost like we’re in the Wild West of LLMs.”

According to Mitra, AI models can be used to build a startup and be used to complete a sensitive task, such as hiring, without fully understanding the guardrails the models have in place.

Nearly two thousand conversations showed how biased AI is

The researchers created 1,920 mock-up conversations based on hiring scenarios featuring Indian caste and race attributes. The recruitment experiment was focused on four occupations: software developer, doctor, nurse, and teacher.

The conversations were generated using the models’ default settings without ‘prompt attacking’ them to generate harmful content.

The results revealed that seven out of the eight models produced a significant amount of biased text during interactions, especially on caste-related topics.

Sixty-nine percent of conversations around caste and 48% of conversations overall contained harmful content. Open-source models performed notably worse compared to the two proprietary ChatGPT models.

“Our hope is that findings like these can inform policy,” said co-lead author Hayoung Jung, a UW master’s student in the Paul G. Allen School of Computer Science and Engineering.

“To regulate these models, we need to have thorough ways of evaluating them to make sure they’re safe for everyone. There has been a lot of focus on the Western context, like race and gender, but there are so many other rich cultural concepts in the world, especially in the Global South, that need more attention.

The findings are presented in the research paper “They are uncultured: Unveiling Covert Harms and Social Threats in LLM Generated Conversations.

Related Articles

Leave a Reply

Your email address will not be published. Required fields are marked *

Back to top button