Survey shows more than two-thirds of workers ‘cautiously optimistic’ about using AI for pay decisions
As artificial intelligence is rapidly integrated into workplaces and HR grapples with how employees adapt to the technology, a new report has good news for HR: more than half (68%) of employees are “cautiously optimistic” and think AI will increase fairness in compensation decisions.
The same proportion of employees said they would trust pay decisions made with AI more than those made without it, according to a Resume Now survey.
Currently, employers are using AI to help with compensation forecasting, including performance analysis, by processing large volumes of internal data. The technology can make recommendations for pay structures, bonus plans and retention improvements, ideally limiting the opportunity for bias and discrimination by reducing human involvement, while also helping HR leaders strategize more accurately.
While the results are encouraging, Markus Giesler, professor of marketing at York University’s Schulich School of Business, says that the challenge for employers is not just about making processes more efficient, but about transparency and justification.
“These algorithms are very efficient,” he says, “but on explainability, on justification, I need to be in a position to legitimately explicate and communicate into the workforce why I have made this assessment … when there's transparency, there's explainability.”
Risks, regulation, need for caution using AI
Resume Now’s survey highlights that the majority of respondents are worried about bias, transparency, and errors in using AI for pay decisions.
Valerio De Stefano, professor and Canada research chair in innovation law and society at York University, explains the legal risks for employers wading into the pool of AI-driven compensation systems.
There are particular risks, he stresses, when employers use third-party providers without full understanding of the technology or what’s being offered: “Companies, especially the ones that don't do this in-house but that outsource … they don't have a full control on what is going on in this software.”
While these systems may boast efficient and bias-free assessment of employee data, the metrics they use may not accurately reflect performance or effort.
They can also raise significant privacy concerns, he adds, and the problem is amplified by the fact that there is limited data or information on how employers are using the technology, so it’s difficult to get an accurate picture of what is happening.
“I am relatively sure that many employers are not aware of all these risks,” De Stefano says.
“Many employers are not aware of all the potential liabilities, and they should ask themselves, what are they doing, and why? Because, unfortunately, all these systems are being marketed very … powerfully, and it is easy to sell anything, as long as you put AI on the label.”
De Stefano points to the legal risk of discrimination if AI systems are not carefully managed, “if the systems draw inferences based on data that concerns protected characteristics in a way that is disproportionate or flawed.”
He urges employers to engage in self-reflection before deploying AI systems to interact with employee data, as the costs could outweigh the benefits.
“Do you actually need this system? And assuming that you have a need, can you get to that need in a way that may be less invasive?” De Stefano says.
“These are always questions that employers should ask.”
‘Human in the loop’: balancing human and machine judgment
Resume Now’s report shows that while employees are open to AI involvement, they also value human oversight and judgment; 65% of respondents worry about algorithm bias, 54% cite lack of transparency, and 45% fear AI replacing human judgment entirely.
Giesler stresses the importance of taking a cautious approach, especially when deciding on compensation – while the line where decisions should be human-made might be hazy, the risk is mitigated by ensuring there is always a “human in the loop”.
“Having the right blend between human and AI is something that a lot of companies are struggling with,” Giesler says.
“For instance, AI is very good at things like compliance and consistency. If you want the same consistent outcome over time and if you want compliance, an algorithm can do that more effectively than a human actor. But in HR especially, we need humans for things such as judging a human employee's personality or judging the context within which a particular human operates.”
The right balance of human versus AI decision-making and analysis depends largely on the nature of the work and the role itself, Giesler adds, but regardless of context, human judgment should still be involved – at least for the final word.