Artificial intelligence technology is now used by a growing number of companies looking to hire the best employees, but new research from Rice University warns that these tools can introduce bias and overlook important characteristics among job applicants.
The study explores the scientific, legal and ethical concerns raised by personnel selection tools that rely on AI technologies and machine learning algorithms. Authors Fred Oswald, a professor in the Department of Psychological Sciences at Rice University; Nancy Tippins of the Nancy T. Tippins Group, LLC; and independent researcher S. Morton McPhail reviewed the use of this technology.
Oswald says that AI technology – which includes games, video-based interviews and data mining tools – can save time in the job application process and the screening of potential employees. But he believes the effectiveness of these tools is questionable. For example, he says AI technology could overlook personality traits and job-related skills associated with successful performance, teamwork and improved diversity.
"To use games as an example, remember how kids avoid tests and love games?" Oswald says. "The same idea applies when hiring, where the hope is that applicants will be attracted to playing a game, and the game data will be at least as effective as a traditional employment test. No doubt games are engaging, but we need much more data to argue for the effectiveness of games as selection tools in hiring situations."
Using machine learning in the hiring process also raises concerns about accessibility and diversity.
"Take an example where job applicants go through a video interview, and their data are then scored by a machine learning algorithm," Oswald says. "It might pick up on job-relevant features such as responses concerning job knowledge or conscientiousness. But we are now highly aware that machine learning algorithms may also pick up on many incidental features irrelevant to the job, such as tone of voice, gestures and facial expressions."
Oswald points out that if an applicant belongs to a minority group or has a disability, the algorithms may have been trained on too little data from people like them to understand and fairly judge their unique talents, which could then limit diversity in the hiring process.
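The underrepresentation problem can be made concrete with another synthetic sketch (again, not the paper's analysis; the group sizes and effects below are made up). A single model is fit to data in which one group supplies only 5% of the training sample and the observed signal relates to performance differently for that group; the shared model then scores the underrepresented group far less accurately.

```python
# Illustrative sketch only: synthetic data, invented group sizes and
# effect sizes. When one group is scarce in the training data, a model
# fit mostly to the majority can score that group poorly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_major, n_minor = 1900, 100  # the minority group is underrepresented

def simulate(n, skill_weight):
    skill = rng.normal(0, 1, n)
    # Assume the same underlying talent expresses differently across
    # groups, e.g., different interview styles for the same skill level.
    performs_well = (skill_weight * skill + rng.normal(0, 1, n)) > 0
    return skill.reshape(-1, 1), performs_well

X_a, y_a = simulate(n_major, skill_weight=2.0)
X_b, y_b = simulate(n_minor, skill_weight=-2.0)  # signal runs the other way

# One model is fit to the pooled data, dominated by the majority group.
X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
model = LogisticRegression().fit(X, y)

print(f"majority-group accuracy: {model.score(X_a, y_a):.2f}")
print(f"minority-group accuracy: {model.score(X_b, y_b):.2f}")
```

Under these (deliberately stark) assumptions, the pooled model scores the majority group well and the underrepresented group at worse than chance, mirroring the concern that sparse data on a group undermines the algorithm's judgments about it.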
Finally, the research raises serious ethical concerns about employers reviewing information that was never part of an applicant's application package. In the past, job applicants could more carefully manage the materials reviewed by a potential employer; now, machine learning tools can mine the internet for unrelated material.
"Just because organizations can mine the internet for applicant information doesn't mean that they should," Oswald says. "And related to this concern, we are now seeing how issues of applicant privacy and fairness are beginning to influence organizational policies as well as state and federal laws."
Oswald and his fellow authors hope the research will serve as a call to action, urging those building and using this technology to work with researchers to evaluate its liabilities, risks and other associated problems.
"Scientific, Legal and Ethical Concerns About AI-Based Personnel Selection Tools: A Call to Action" appeared in a recent edition of Personnel Assessment and Decisions. The can be downloaded for free online at https://scholarworks.bgsu.edu/pad/vol7/iss2/1/.