Remote work offers 'trade-off' in shielding women from gender bias: study

Women say they experience less discrimination working remotely – can AI help combat in-person bias?

As organizations push for a return to the office, a new study from the University of Toronto’s Rotman School of Management reveals an uncomfortable truth: the workplace remains a site of persistent gender discrimination.

The research, which surveyed over 1,000 professional women in hybrid roles, found that incidents of gender bias occur significantly more often in person than in remote settings. This disparity raises an important question for employers: how can workplaces be structured to reduce gender discrimination without making remote work a necessary refuge for many women?

Laura Doering, co-author of the study and associate professor of strategic management at the Rotman School of Management at the University of Toronto, underscores the complexity of the issue.

“What I think we really didn't know before was just the tremendous disparity in the incidents of gender discrimination between these two locations. And now that we know this, we can better evaluate this trade-off for women between working remotely and working on site.”

Gender bias still a workplace reality

The study revealed that when working on-site, 31% of women reported experiencing gender discrimination, compared to 17% while working remotely. For women who worked primarily with men, the gap was even wider: 58% faced discrimination in the office versus 26% remotely.

Younger women under age 30 were also more likely to experience gender discrimination on-site – 31% compared to 26% for older women – with only 14% of younger women likely to experience it while working remotely. 

The women surveyed were asked to report their experiences of 11 different forms of gender-based slights and offenses, including inappropriate attention, having their ideas ignored or stolen, being assigned tasks unrelated to their job, being excluded by co-workers and being addressed with a sexist name during a meeting.

These statistics confirm what many women have long known: being physically present in the workplace increases their exposure to biases, ranging from having their ideas ignored to outright harassment.

As the study notes, this reality creates an unfair trade-off for professional women, forcing them to choose between career advancement opportunities associated with in-person work and the psychological safety of remote work.

“You can create policies. You can create laws around fair hiring practices and setting their wages. It's harder to create a law that says, ‘Don't take women’s ideas in meetings. Don't interrupt women in meetings,’” Doering says.

“You can't legislate that kind of stuff. And so it's more insidious.”

Why remote work reduces bias

Remote work doesn’t eliminate gender bias but does seem to reduce its expression in day-to-day interactions, the Rotman research found. One explanation is the structured nature of virtual communication.

In remote settings, meetings often follow a set agenda, giving participants a more equal opportunity to contribute. The absence of physical proximity also removes certain forms of bias, such as body language-driven exclusion or inappropriate attention.

“There's all these things about the physical body that are just not as obvious, and so it means that we don't organize our behaviour around those things in a way that we might if we were in person,” says Doering.

However, she stresses that while remote work may create a more level playing field, it is not a replacement for an inclusive workplace culture. Instead, the focus should be on translating the benefits of remote work into on-site environments.

“There’s still a lot of work to be done in our in-person, on-site interactions, and in some ways, it’s unfortunate that retreating to home is one of the solutions to addressing this issue,” Doering says.

“These are the waters that we swim in from the time that we're born, and it's really hard to pull a policy lever or create an organizational practice that upends a lifetime of training in how you're supposed to treat certain groups of people versus others.”

Leveraging AI to detect workplace gender bias

Serena Huang, founder of Data with Serena in Chicago, says one way that hidden biases in workplaces can be identified is through AI. It’s a revelatory new strategy, she says, because sometimes individuals don’t realize themselves that they’re being discriminated against, and this reality can skew data that comes only from self-reporting.

“Something that Gen AI, particularly, is really good at is understanding words or understanding text,” says the data expert.

“So, you don't have to read thousands of comments anymore, and you don't even need to know AI. You just need to use it.”

Huang explains that mature, AI-savvy HR departments can leverage generative AI (genAI) by loading performance management documents into it and having them analyzed for biased behaviour.

“AI will tell you things like ‘Wow, women receive unhelpful feedback’ or ‘The feedback is very harsh against a particular group of employees.’ And all of that really helps uncover the real bias that we may not see.”

Analyze HR data for gender bias

Huang explains that AI can analyze language patterns in employee feedback, performance reviews, and workplace communication, revealing disparities between how male and female employees are evaluated.

“Let's say you have a group of female project managers versus male project managers, and you're hearing a lot of complaints from female project managers saying it's impossible to get promoted, it's impossible to get pay increases, things are so unfair,” Huang explains.

“What does the performance review say? Using AI to summarize … you can quickly see just even the language being different.”

GenAI can even be used to analyze the tone of performance reviews and communication, she adds, identifying where certain managers may be unconsciously biased in their comments. One way of doing this is by asking genAI to describe the tone of groups of performance reviews.

“Pull out all the documents that they have written for feedback, and see, do they treat their employees differently?” Huang recommends.

“Again, maybe not conscious at all, maybe not intentional at all, but a lot of times they might just watch their words a little bit more with someone who's not from the same background, and then that can result in unhelpful feedback, for instance.”
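As an illustration of the kind of comparison Huang describes, the sketch below groups review text by cohort and counts vague versus actionable feedback terms. The word lists and review snippets are hypothetical examples for demonstration only; a real analysis would use an LLM or a validated lexicon rather than hand-picked keywords.

```python
from collections import Counter
import re

# Hypothetical word lists -- illustrative only, not from the study.
VAGUE_TERMS = {"abrasive", "difficult", "attitude", "nice", "helpful"}
ACTIONABLE_TERMS = {"deliver", "deadline", "roadmap", "metrics", "architecture"}

def tone_profile(reviews):
    """Count vague vs. actionable terms across a list of review texts."""
    counts = Counter()
    for text in reviews:
        for word in re.findall(r"[a-z]+", text.lower()):
            if word in VAGUE_TERMS:
                counts["vague"] += 1
            elif word in ACTIONABLE_TERMS:
                counts["actionable"] += 1
    return counts

# Hypothetical review snippets for two cohorts of project managers.
group_a = ["Sometimes abrasive in meetings; should work on attitude."]
group_b = ["Delivered the roadmap ahead of deadline; strong on metrics."]

print(tone_profile(group_a))  # counts skew toward "vague" terms
print(tone_profile(group_b))  # counts skew toward "actionable" terms
```

Even a crude comparison like this can surface the kind of language gap Huang points to: one cohort receiving personality-focused feedback while another receives feedback tied to concrete work.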

Identifying exclusion and gender bias in workplace dynamics

AI can also play a role in detecting subtle patterns of workplace exclusion that often go unnoticed. Huang describes a scenario where employees are unintentionally left out of key discussions.

“In hybrid work or remote work, you can use the metadata on meetings … to identify points of exclusion,” she says.

“For instance, let's say a manager invites people to do some brainstorming, and this manager never invites the newest employee, they forgot them for whatever reason. Or maybe they were just really quiet – they didn't contribute. Or, maybe this meeting is happening at 4 p.m. and it's time for women on the team or caretakers on the team to pick up their kids. And then, over time, this manager just doesn’t invite them anymore.”

Huang explains that these patterns, while often unintentional, can accumulate into broader inequalities in workplace opportunities, reinforcing existing gender gaps in leadership and pay. By using genAI tools to analyze data such as the frequency of emailed meeting invites, HR teams can identify where these everyday exclusions are happening.
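A minimal sketch of the invite-frequency idea might look like the following, where calendar metadata is reduced to invitee lists and anyone invited far less often than the team's median rate is flagged. The names, meetings, and the 0.5 threshold are all hypothetical.

```python
from collections import Counter
from statistics import median

def flag_excluded(meetings, threshold=0.5):
    """Flag attendees invited less than `threshold` times the median rate.

    `meetings` is a list of invitee lists, e.g. drawn from calendar metadata.
    """
    invites = Counter(name for invitees in meetings for name in invitees)
    typical = median(invites.values())
    return sorted(name for name, n in invites.items() if n < threshold * typical)

# Hypothetical invite lists for a series of brainstorming meetings.
meetings = [
    ["ana", "ben", "chen", "dina"],
    ["ana", "ben", "chen"],
    ["ana", "ben", "chen"],
    ["ana", "ben"],
]

print(flag_excluded(meetings))  # → ['dina']
```

The point is not the specific statistic but the pattern: a one-off omission is noise, while a sustained gap between one person's invite count and the team's is exactly the slow, unintentional drift Huang describes.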

AI as a tool, not a solution

While AI offers a promising way to detect bias, Huang cautions that organizations must handle employee data responsibly and legally, reminding HR professionals that performance and review data also contains sensitive information.

“There's not a 1-800 number or a toll-free number you can call and say, ‘I would like this data back,’” she says, advising that legal sign-off should always be obtained before working with any employee data.

She also emphasizes the importance of transparency with employees: “The other flip side of the coin is, of course, communicate with transparency to employees that your data will be used in this way. I think sometimes HR forgets that you’re supposed to tell people that you’ll be using their data for certain purposes.”

Used responsibly by knowledgeable HR teams, or with the assistance of accountable third-party providers, data and genAI can make a real impact on workplace bias, Huang stresses. She highlights the following best practices:

  • Use genAI to analyze workplace feedback and performance reviews for hidden bias. AI can identify patterns of unhelpful or harsher feedback given to women compared to male employees. This can highlight discrepancies in how employees are assessed and supported. 
  • Identify patterns of exclusion in meetings and other everyday workplace interactions. “In hybrid work or remote work, you can use the metadata on meetings … to identify points of exclusion.” AI can track who is invited to key discussions and whether certain groups – such as new employees, women or caretakers – are being unintentionally left out. 
  • Analyze performance reviews for different tones used for different groups. AI can determine if some groups receive more negative, vague, or discouraging feedback compared to others. “I would ask genAI to analyze the tone, say: ‘Analyze the tone of this performance review feedback,’ or something like that.”
  • Ensure legal oversight before implementing AI solutions in HR. “Before any employee data goes anywhere, whether it’s an in-house AI, or anything else, that needs to be signed off because you don’t want to create risk if it becomes discoverable.”
  • Increase transparency in how workplace data is used. Employees should always be informed beforehand about how AI is analyzing their interactions and feedback. 
