Top employers look to mitigate 'algorithmic bias'

IBM, GM, Walmart join alliance focused on trust, transparency in HR processes

A group of leading employers is looking to mitigate the “data and algorithmic bias” in human resources and workforce decisions, including recruiting, compensation and employee development.

The Data & Trust Alliance brings together leading institutions to learn, develop and adopt responsible data and AI practices. Members share a common belief that data and intelligent systems will be critical for creating economic and societal value in the coming era, but must be deployed responsibly.

"As businesses transition from 'going digital' to becoming 'data enterprises’, it is imperative to unlock the value of data and AI in ways that earn trust with every stakeholder," says Doug McMillon, president and CEO of Walmart. "Developed and used responsibly, these systems hold the promise of making our workforces more diverse, more inclusive, and ultimately more innovative."

Trustworthy AI is crucial for businesses, says Arvind Krishna, chairman and CEO of IBM, and the company is looking forward to working with the alliance “on making responsible data and AI practices the norm, not the exception.”

Recently, New York City announced new rules to combat the potential for bias in recruitment using artificial intelligence (AI).

Vendor evaluations

The alliance has launched its first initiative, "Algorithmic Safety: Mitigating Bias in Workforce Decisions," to help companies evaluate vendors based on their ability to detect, mitigate and monitor algorithmic bias.

"Every business strives to attract, motivate and retain the most talented and diverse people in the labour force," says Mary Barra, chair and CEO of General Motors. "This initiative will play an important role in our overall commitment to diversity, equity and inclusion by raising standards for trust and transparency in our human resources processes."

Through the initiative, member companies can supplement their respective vendor evaluation processes with education and criteria to evaluate suppliers of HR applications and solutions on their commitment to algorithmic safety. They can also get a qualitative scorecard and guidance for integrating the Algorithmic Bias Safeguards into their processes.

The safeguards include 55 questions in 13 categories that companies can adopt to evaluate vendors on different criteria, including training data and model design; bias testing methods; bias remediation; transparency and accountability; and AI ethics and diversity commitments.
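The article does not publish the safeguards themselves, but a qualitative, category-based scorecard of this kind is straightforward to model. Below is a minimal sketch, assuming hypothetical category names, question counts and a 0–2 rating scale (none of which come from the Alliance's actual materials), of how a company might tally vendor responses per category:

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the Alliance's actual 13 categories and
# 55 questions are not enumerated in the article. These five names are
# taken from the criteria the article mentions.
CATEGORIES = [
    "Training data and model design",
    "Bias testing methods",
    "Bias remediation",
    "Transparency and accountability",
    "AI ethics and diversity commitments",
]

@dataclass
class VendorScorecard:
    """Qualitative scorecard: each question is scored 0-2
    (0 = no evidence, 1 = partial, 2 = strong evidence)."""
    vendor: str
    responses: dict = field(default_factory=dict)  # category -> list of scores

    def record(self, category: str, scores: list) -> None:
        # Reject categories outside the agreed evaluation framework
        if category not in CATEGORIES:
            raise ValueError(f"Unknown category: {category}")
        self.responses[category] = scores

    def category_rating(self, category: str) -> float:
        # Normalize the category's total score to the 0-1 range
        scores = self.responses.get(category, [])
        return sum(scores) / (2 * len(scores)) if scores else 0.0

card = VendorScorecard("ExampleHRVendor")
card.record("Bias testing methods", [2, 1, 2, 2])
print(round(card.category_rating("Bias testing methods"), 3))  # 0.875
```

A real evaluation would of course attach the actual question text, evidence notes and reviewer identity to each score; the point here is only that per-category normalization lets companies compare vendors on each criterion independently.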

Recently, the Inclusive Design Research Centre (IDRC) at Ontario College of Art & Design University (OCAD U) was given a planning grant from Kessler Foundation and Microsoft to explore bias in current hiring systems.
