New research suggests women’s lower use of generative AI driven by risk concerns, not skills — and Canadian employers need to pay attention
As generative AI becomes embedded in everything from drafting emails to preparing reports, a divide is emerging in who uses it and how. New research reveals that women are more likely to question the quality, ethics and downstream consequences of AI-generated work, while men more readily lean into the speed gains.
Catherine Connelly, professor of human resources and management in the DeGroote School of Business at McMaster University, says women’s hesitation is rooted in what they are seeing in the news about AI harms. The issues are not abstract, she notes, but concrete risks that are increasingly visible in public debate and in workplace discussions.
“It doesn't surprise me that women notice stories like that and think, ‘That's horrible, that's dystopian, I don't want any part of that,’” she says. “Who knows what's to come? I think that level of skepticism is very understandable.”
Culture, ‘halo effects’ and speaking up about AI
The Oxford University research paper, “Women Worry, Men Adopt: How Gendered Perceptions Shape the Use of Generative AI”, frames women’s cautious approach to GenAI as “other-oriented”: rather than being concerned about personal risk, they worry about social consequences, such as effects on mental health, the environment and employment.
From Connelly’s perspective, women’s concerns overlap with those of many other workers — but they may feel them more acutely. She notes those concerns are especially relevant for employees who handle sensitive client data, trade secrets or regulated information; uploading that material into a public AI tool, even for efficiency, can carry legal and reputational risks, and workers may be unsure where the boundaries lie.
“I think the issue right now is that AI has such a positive halo around it, where AI is the future, AI is high-performance, AI is everything good about the future of work,” she says.
“So anything you would say against it is against progress, basically.”
In environments where AI is framed as unquestionable progress, Connelly says, employees who raise concerns — often women, according to the research on risk perceptions — may worry about being seen as blocking innovation or being “anti-tech.”
This pattern is rarely limited to AI, she says: organizations should create space for critical dialogue about any new tool or process, including generative AI, as part of valuing employees’ judgement and expertise.
“A company that has this halo around AI, they likely have a halo around other things, too... So I think it's unlikely that it's just AI that you're not allowed to question, but there's a lot of value in organizations, of being able to question things and debate things openly, honestly. That's how you get better ideas.”
For employers looking to close gender gaps in AI use, Connelly says this may mean explicitly inviting questions and building feedback loops into AI rollouts, rather than assuming silence equals agreement: “Ideally, an organization would promote a dialogue about any tool that’s being used in the workplace, or any process that’s being adopted, just because the people that you have in your company, you value their opinion. That’s why you hired them.”
Audit current GenAI use before expanding
Before employers can address gender gaps in AI use, Connelly says they need a realistic picture of what is already happening, including sanctioned tools embedded in enterprise software, and unsanctioned use of free web-based services.
“I think companies need a better handle on that. In terms of what tools are being used, how it's being used, when it's being used,” Connelly says.
“If the issue is people have not enough time to do anything, or they don't have the skill sets to do it themselves, and they're relying on these tools, the company needs to understand that background.”
Once they understand current processes, Connelly says employers must decide what practices and uses they actually want – and design accordingly. That clarity can also help address gender gaps by making it clear that cautious use and verification are valued, not seen as inefficiency.
“If the output that they're creating matters, and often it really does, then they need to bake that into the process,” she says.
“That could become a second step in the process, where a different person is checking what's being produced by someone else.”
Performance management, accountability and the gender gap
Connelly highlights how short-term performance measures can unintentionally favour heavy AI users and penalize more cautious employees. Given that research shows women are often more hesitant to use generative AI because of risk concerns, such metrics could contribute to a gendered gap in ratings, pay and promotion.
“This more cautious approach would be better in the long-term, that may not be highlighted by a short-term performance appraisal system,” she says, explaining that caution shouldn’t be downplayed simply because its benefits accrue over the long term rather than as immediate productivity gains.
“I think it's more important to take this broader view.”
Connelly suggests that performance appraisal systems should explicitly recognize and reward tasks such as verification, risk spotting and process improvement — work that is essential to safe AI use, but not always visible in output counts: “The people who are either putting thought into the process or who are carefully checking to make sure there's none of these catastrophic errors.”
She also warns against letting AI become an all-purpose excuse when things go wrong.
“If people are allowed to just blame AI and that's an excuse, then it's a removal of accountability in the workplace,” she says. “And that's a problem. You want employees to take accountability for what they're producing.”