But concerns about employee privacy and AI prove challenging for HR leaders
When Loblaw first ventured into the DEI arena, the big push focused on gender and on racialized or visible minority colleagues. Since setting representation goals and tracking these metrics, it has been able to close the gap at the enterprise level for these groups.
“That is attributed to a lot of work that's coming out of our data analytics,” says Jennifer Boyce, senior director, DEI at Loblaw. “We still have work to do in other groups — and the next phase of our strategy is how do we expand that and make sure that we can see those types of results across all marginalized groups?”
Boyce was part of a panel at HRD’s recent HR Tech Summit Canada, speaking alongside Pascale Alpha, chief DEI officer at CAE, and Brent Arnold, partner and technology subgroup director at Gowling WLG. The three discussed the various ways HR technology can help DEI efforts when it comes to talent strategy — along with potential challenges.
Tech tools to boost DEI at Loblaw, CAE
“What has really amazed me is some of the low-tech options that really can make an impact,” says Boyce.
As an example, she cited Reclaim Your Name, a custom dictionary of Asian names that can be downloaded and plugged into Microsoft Word so that some 8,000 Asian names are no longer flagged as spelling errors.
“It's so simple to use and simple to implement, but has a great big impact. So you don't always have to go big — even those small tools can have a major impact.”
There’s also a tool that provides a better understanding of the inclusivity of your job description, she says, which “has really made a difference.”
It’s also been valuable to leverage XML feeds to push postings out to different job boards and attract different candidates, says Boyce, working with various partners across the country, as “another great way from a sourcing perspective.”
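The XML job feeds Boyce mentions are typically files a job board polls and ingests on a schedule. A minimal sketch of building one is below; the element names (`jobs`, `job`, `title`, `location`, `url`) and the sample posting are generic placeholders, since each board publishes its own required schema.

```python
# Sketch of generating a simple XML job feed of the kind job boards ingest.
# Element names and the sample posting are hypothetical; real boards
# each define their own schema and required fields.
import xml.etree.ElementTree as ET

postings = [
    {"title": "Data Analyst", "location": "Toronto",
     "url": "https://example.com/jobs/1"},
]

root = ET.Element("jobs")
for posting in postings:
    job = ET.SubElement(root, "job")
    for field, value in posting.items():
        ET.SubElement(job, field).text = value

# Serialize the feed so it can be hosted at a URL the board polls.
feed = ET.tostring(root, encoding="unicode")
```

In practice, a feed like this is regenerated from the applicant tracking system whenever postings change, then fetched by each partner board.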
At CAE, a Canadian manufacturer in the airline space, HR created a new recruitment website that uses artificial intelligence to automatically match candidates to company jobs.
“It removes some of the biases… all these biases that you may have when you look at [an application] are not there, it's the skills that will show up. And that helps for DEI,” says Alpha.
The company has also used a tool called an inclusion advisor, which flags potentially biased wording after someone has written an internal message and suggests an alternative, she says.
“In a way, it becomes a bit of a mini coach.”
Another important development on the tech side involves the company’s self-identification survey, with all the data being compiled in Excel, says Alpha.
“It was difficult,” she says. “So the team is working on having dashboards that come out in real time and connecting that data with other data. That's extremely important for us.”
Using technology to measure DEI
Gauging the ROI of DEI initiatives is no easy task, but CAE looks into the metrics when it comes to its employee engagement surveys.
“Now what we're doing is we're using technology to separate ‘OK, what is the [engagement] score for women versus men?’ just so that you can see there's a difference. So these are all tools that can help you do your job better,” she says.
“[Overall] it’s trying to see, throughout the journey of the employee lifecycle, are there any issues, are you being inclusive, are there ways you can make the employee experience better so that they feel that they belong? Or is there some adjustment that you should make?”
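The disaggregation Alpha describes amounts to grouping survey responses by a demographic attribute and comparing the averages. A minimal sketch, with hypothetical scores and group labels:

```python
# Sketch of disaggregating engagement-survey scores by demographic group.
# The responses and group labels here are hypothetical sample data.
from collections import defaultdict
from statistics import mean

responses = [
    ("woman", 72), ("man", 80), ("woman", 68),
    ("man", 78), ("woman", 75), ("man", 82),
]

# Collect scores per group, then average each group.
scores = defaultdict(list)
for group, score in responses:
    scores[group].append(score)

by_group = {group: mean(vals) for group, vals in scores.items()}

# The gap between groups is the signal DEI teams watch over time.
gap = max(by_group.values()) - min(by_group.values())
```

Real engagement platforms do this inside their dashboards, but the underlying comparison is the same: per-group averages and the gap between them, tracked survey over survey.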
Loblaw also uses engagement surveys to break down the data by different diverse groups, according to Boyce.
“We want to understand the experiences of the marginalized colleagues that we have within our workforce. And are those experiences equitable compared to our majority colleagues?” she says.
“That has really driven a big part of our strategy. Because the engagement survey gives us information right from the mouths of our colleagues. And we're able to gain a lot of insight through that and be able to look at where we need to focus and where we may have some biases that could be impacting the engagement across our company.”
To measure progress against its representation goals, Loblaw leverages a self-identification program built into its HRIS.
“We use that at an aggregate level just to track and make sure ‘Are we trending in the right direction?’” says Boyce.
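Aggregate-only reporting of voluntary self-identification data, as Boyce describes, usually also means suppressing very small groups so no individual can be inferred from the counts. A minimal sketch, where the threshold of five and the group labels are assumptions for illustration:

```python
# Sketch of aggregate-only reporting for voluntary self-identification data.
# Suppressing small cells is a common privacy safeguard; the threshold
# of 5 and the group labels are hypothetical.
from collections import Counter

responses = ["group_a"] * 5 + ["group_b"] * 3 + ["group_c"]

MIN_CELL_SIZE = 5  # assumed threshold; real programs set their own

counts = Counter(responses)
report = {
    group: (n if n >= MIN_CELL_SIZE else "<suppressed>")
    for group, n in counts.items()
}
```

Only the suppressed aggregate ever leaves the system, which supports the trust-building Boyce emphasizes: individual responses stay with the small set of people who administer the program.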
Building on that, Loblaw created what it calls “activity metrics” and uses these as leading indicators for representation.
“We're breaking down our data into talent development, into recruitment and compensation, engagement. And we look at these metrics to be able to tell us where we need to focus,” she says.
From a talent development perspective, Loblaw also monitors the diversity of its leadership programs.
“We shift left early on in the process to make sure that we've got the diverse pipeline, and that we’re developing candidates equally across the organization,” says Boyce.
And from a recruiting perspective, the company analyzes the types of applicants it’s attracting.
“Coming from being a woman in IT, if I hear one more time, ‘Oh, there's just not enough women in IT’ — that really frustrates me because, in my mind, it could just be that you're not attracting the women that are out there.
“So attracting diverse candidates takes a bit more work, and we are doing that work. And we need to track and make sure that we are doing the right things to attract the right candidates.”
DEI challenges: Employee privacy and AI
Of course, one of the big challenges in pursuing DEI initiatives involves employee privacy. As an example, CBC faced an unwelcome spotlight recently when employees raised concerns about the use of their personal information for a “cultural census.”
So how can these concerns be addressed?
“The main thing is communication,” says Alpha. “People have to understand what you're doing with the data, that you're doing this for good: ‘We want to understand better our workforce and the makeup of our workforce so that we can improve, so that we can be more diverse and inclusive, so that [we offer], for instance, benefit programs and policies that will support the different underrepresented [groups].’ So you have to explain it, and the leadership has to be ambassadors for that.”
It’s also extremely important to show that the data is going to be taken care of, that there's not going to be any leaks and that only a certain number of people can have access to it, she says.
“I don't see the data, I only see an aggregate — it's just the people who work in [IT].”
It’s an area that Boyce says she is focused on right now.
“Because it is self-identification and it is voluntary, having a program that is rooted in trust is key. And this is still something we are working with.”
That data is very private data, she says, “and being very transparent on what you're doing to protect that data has been key in building trust.
“So we audit our controls, we're very transparent on the controls we have, who has access, how is it used. We always focus on the fact that it's used at an aggregate level only… And we work with our security group, we work with our privacy group, and we make sure they're signed off on our controls so that we can build trust in our colleagues.”
The retention of data is a huge concern, says Arnold.
“Under privacy law, you're supposed to keep that information only for as long as you need it,” he says, citing as an example someone who is applying for a job.
“There are a number of class-action lawsuits I’m defending right now where millions of people applied for credit cards, many of them didn't get them, and the application data was held on to indefinitely.”
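The retention principle Arnold describes translates, in systems terms, into a periodic purge of personal records older than a defined retention window. A minimal sketch, where the 24-month window and record structure are hypothetical:

```python
# Sketch of a retention-period purge for applicant data, reflecting the
# principle that personal information is kept only as long as needed.
# The ~24-month window and the record fields are hypothetical.
from datetime import datetime, timedelta

RETENTION = timedelta(days=730)  # assumed retention policy

applications = [
    {"id": 1, "received": datetime(2020, 1, 15)},
    {"id": 2, "received": datetime(2023, 6, 1)},
]

now = datetime(2024, 1, 1)

# Keep records inside the window; flag the rest for deletion.
retained = [a for a in applications if now - a["received"] <= RETENTION]
purged_ids = [a["id"] for a in applications if now - a["received"] > RETENTION]
```

Running a job like this on a schedule, with the purge logged and auditable, is what turns a retention policy on paper into something a regulator or court can verify.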
Also important to consider: Potential liability for an organization relying on a tool like ChatGPT.
“You're responsible for what it does. And if you were relying on it without understanding it, then there’s a huge black box probably ahead, and you're going to end up being responsible for that,” he says, citing the example of a U.S. judge who used an AI-based sentencing program that had certain prejudices built into it.
“That meant that the judges would be told by the software to assign sentences that were, on average, one-third longer than the ones the judges would normally give out... so those people went to jail for longer because the software was prejudiced — and it took them months to figure that out,” says Arnold.
It’s a concern for Boyce. While AI can help remove biases, if it's not governed appropriately, it can actually create biases without us even knowing it, she says.
“That's my stay-awake issue right now: I think AI... is the way of the future, but we’ve got to be very careful about how it’s used, because it can be very dangerous if it's not governed properly.”