Promises and dangers of using AI for employment: Guarding from data bias



By AI Trend Staff

AI is now widely used in hiring to write job descriptions, screen candidates, and automate interviews, but if it is not implemented carefully, it poses a wide range of discrimination risks.

Keith Sonderling, Commissioner, US Equal Employment Opportunity Commission

That was the message from Keith Sonderling, Commissioner of the US Equal Employment Opportunity Commission, speaking at the AI World Government event held in person and virtually in Alexandria, Virginia, last week. Sonderling is responsible for enforcing federal laws that prohibit discrimination against job applicants because of race, color, religion, sex, national origin, age, or disability.

“The idea that AI would become mainstream in HR departments was closer to science fiction two years ago, but the pandemic has accelerated the rate at which AI is being used by employers,” he said. “Virtual recruiting is now here to stay.”

It’s a busy time for HR professionals. “The Great Resignation has led to a great rehiring, and AI is playing a role in that like we have never seen before,” Sonderling said.

AI has been employed in hiring for years, he noted; “it did not happen overnight.” It is used for tasks such as chatting with applicants, predicting whether a candidate would take the job, projecting what type of employee they would become, and mapping out upskilling and reskilling opportunities. “In short, AI is now making all the decisions once made by HR personnel,” which he did not characterize as either good or bad.

“Carefully designed and properly used, AI has the potential to make the workplace more equitable,” Sonderling said. “But carelessly implemented, AI can discriminate on a scale that HR professionals have never seen before.”

Training datasets for AI models used in hiring need to reflect diversity

That is because AI models rely on training data. If a company’s current workforce is used as the basis for training, “it will replicate the status quo. If one gender or one race predominates, it will replicate that,” he said. Conversely, AI can help mitigate the risk of hiring bias based on race, ethnic background, or disability status. “I want to see AI improve on workplace discrimination,” he said.
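To make the status-quo point concrete, here is a minimal sketch, not from the article, of how a team might compare the demographic makeup of a hiring model’s training data against a benchmark such as the qualified applicant pool. The “gender” attribute, the records, and the benchmark shares are all hypothetical.

```python
# Minimal sketch: compare the share of each group in a hiring model's
# training data to a benchmark, to see whether the data simply mirrors
# the current workforce. All names and numbers below are hypothetical.
from collections import Counter

def representation_gap(records, attribute, benchmark):
    """Return, for each group, its share in the training data minus the
    expected share from the benchmark (positive = over-represented)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in benchmark.items()
    }

# Hypothetical training records drawn from past hiring decisions.
training_records = [
    {"gender": "male"}, {"gender": "male"}, {"gender": "male"},
    {"gender": "male"}, {"gender": "female"},
]
# Hypothetical benchmark: expected share of each group in the applicant pool.
benchmark = {"male": 0.5, "female": 0.5}

for group, gap in representation_gap(training_records, "gender", benchmark).items():
    print(f"{group}: {gap:+.0%} vs. benchmark")  # e.g., male: +30%, female: -30%
```

A gap like the one printed above is the kind of signal that, per Sonderling’s point, would be replicated by a model trained on that data unless it is addressed first.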

Amazon began building a hiring application in 2014 and found over time that it discriminated against women in its recommendations, because the AI model was trained on a dataset of the company’s own hiring records from the previous decade, which came primarily from male applicants. Amazon developers tried to correct the system but ultimately scrapped it in 2017.

Facebook recently agreed to pay $14.25 million to settle civil claims by the US government that the social media company discriminated against American workers and violated federal recruitment rules, according to an account from Reuters. The case centered on Facebook’s use of what it called its PERM program for labor certification. The government found that Facebook refused to hire American workers for jobs that had been reserved for temporary visa holders under the PERM program.

“Excluding people from the hiring pool is a violation,” Sonderling said. If the AI program “withholds the existence of the job opportunity from that class, so they cannot exercise their rights, or if it downgrades a protected class, it is within our domain,” he said.

Employment assessments, which became more common after World War II, provide high value to HR managers, and with the help of AI they have the potential to minimize bias in hiring. “At the same time, they are vulnerable to claims of discrimination, so employers need to be careful and cannot take a hands-off approach,” Sonderling said. “Inaccurate data will amplify bias in decision-making. Employers must be vigilant against discriminatory outcomes.”

He recommended researching solutions from vendors who vet their data for risks of bias based on race, sex, and other factors.
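One concrete check that such vendors and employers commonly run comes from the EEOC’s Uniform Guidelines (referenced in the example below): the “four-fifths rule,” under which a selection rate for any group that falls below 80 percent of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below, with hypothetical applicant and hire counts, shows one minimal way to compute that ratio; it is an illustration, not a compliance tool.

```python
# Minimal sketch of the "four-fifths rule" check from the EEOC's Uniform
# Guidelines: a group's selection rate below 80% of the highest group's rate
# is generally treated as evidence of adverse impact.
# The group names and applicant/hire counts are hypothetical.

def adverse_impact_ratios(outcomes):
    """outcomes maps group -> (applicants, hires); returns each group's
    selection rate divided by the highest group's selection rate."""
    rates = {g: hires / applicants for g, (applicants, hires) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

outcomes = {
    "group_a": (200, 60),   # 30% selection rate
    "group_b": (150, 30),   # 20% selection rate
}

for group, ratio in adverse_impact_ratios(outcomes).items():
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({flag})")
```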

One example is HireVue of South Jordan, Utah, which has built a hiring platform predicated on the Equal Employment Opportunity Commission’s Uniform Guidelines, designed specifically to mitigate unfair hiring practices.

This is described in part in a post on the HireVue website on AI ethical principles: “Because HireVue uses AI technology in our products, we actively work to prevent the introduction or propagation of bias against any group or individual. We will continue to carefully review the datasets we use in our work and ensure that they are as accurate and diverse as possible. We strive to build teams from diverse backgrounds, with diverse knowledge, experience, and perspectives, to best represent the people our systems serve.”

The post also states: “Our data scientists and IO psychologists build HireVue assessment algorithms in a way that removes data from consideration by the algorithm when that data contributes to adverse impact, without significantly affecting the assessment’s predictive accuracy.”

Dr. Ed Ikeguchi, CEO of AiCure

The issue of bias in the datasets used to train AI models is not limited to hiring. Dr. Ed Ikeguchi, CEO of AiCure, an AI analytics company working in the life sciences industry, stated in a recent account in HealthcareITNews, “AI is only as strong as the data it’s fed, and lately that data backbone’s credibility is being increasingly called into question. Today’s AI developers lack access to large, diverse datasets on which to train and validate new tools.”

He added, “They often need to leverage open-source datasets, but many of these were built using computer programmer volunteers, a predominantly white population. Because the algorithms are trained on data with limited diversity, they can prove unreliable when applied to a broader population of different races.”

He also said, “There needs to be an element of governance and peer review for all algorithms, as even the most solid and tested algorithm is bound to produce unexpected results. An algorithm is never done learning; it must be constantly developed and fed more data to improve.”

And, “As an industry, we need to become more skeptical of AI’s conclusions and encourage transparency in the industry. Companies should be able to answer basic questions such as, ‘How was the algorithm trained? On what basis did it draw this conclusion?’”

Read the source articles and information at AI World Government, from Reuters and from HealthcareITNews.
