Are some of your workers afraid to admit to using AI for fear of appearing incompetent?
Is there a “competence penalty” happening in your workplace?
A recent study suggests as much, revealing that the fear of being perceived as less competent is quietly discouraging many employees from using AI tools.
It found that even when access and training barriers are removed, employees — especially women and older workers — may avoid new technology if they fear it will make them appear less capable.
The result? Not only is adoption lagging, but existing workplace inequalities are being reinforced, with significant implications for talent management, equity and organizational performance, say the researchers.
AI adoption lags despite incentives
The study of nearly 29,000 software engineers at a major global technology company was done by Phyliss Jia Gai and Jiayi Hou of the Guanghua School of Management at Peking University, and Yanping Tu of the Faculty of Business at the Hong Kong Polytechnic University.
As part of the research, a survey found that only 41% of software engineers at the company had ever used its AI tool — despite 12 months of extensive promotion and the tool being “pre-installed on all company-issued devices and fully integrated into standard engineering workflows.”
The researchers emphasized that any differences in AI adoption “cannot be explained by variations in access, skills, policy, or job function.”
A closer look at the data revealed persistent and widening gaps: During the first month following the AI tool’s launch, the adoption rate was 9% among male engineers and 5% among female engineers, a gender gap of four percentage points.
Twelve months later, adoption rates rose to 43% for males and 31% for females, widening the gender gap to 12 percentage points. Mature-age engineers also lagged behind their younger peers.
Competence penalty and female engineers
When asked, the female software engineers said they had no problem with the idea of using the tool and felt it enhanced their productivity, says Gai, assistant professor of marketing — but still, they did not use it.
The researchers identified a key reason for this disparity:
“Using technology to assist task completion signals a lack of competence to perform the task independently. In high-stakes environments, where status, credibility, and career advancement are closely tied to the display of competence, relying on AI to perform tasks may result in a competence penalty (CP) for technology users,” they said in the study.
To test the competence penalty, the researchers conducted a randomized experiment in which engineers reviewed identical code but were told it was authored by either a male or female engineer, with or without AI assistance. The findings were clear: “On average, purported AI adoption significantly reduced competence ratings of the engineer,” the study notes.
But the penalty was more severe for female engineers — even though the quality of the work was rated the same for both men and women, says Gai.
“When the male author, a male engineer, uses AI, he doesn't get penalized for that, so his competence rating remains the same, but the female engineer, unfortunately, receives a significantly lower competence rating.”
In addition, older workers at the company were hit with the competence penalty more often than younger workers.
What’s behind the penalty of using AI?
Why? One reason lies in social psychology, she says, where a classic, seminal theory holds that marginalized groups feel threatened when their group is outnumbered by another — such as women in STEM.
“It's pretty obvious to us that, indeed, at this workplace, females, as a marginalized group, [faced a] penalty for using AI, and they kind of sense that, so they avoid using that to establish their own competent signals to their peers and their managers,” says Gai.
The study also found that engineers anticipated the competence penalty, which in turn discouraged them from adopting AI tools. “Consistent with our hypothesis, logistic regression showed that higher anticipated competence penalty is associated with lower rates of AI adoption both independently… and when controlling for other AI perceptions,” the authors wrote.
The competence penalty was most often imposed by senior male engineers who had not adopted AI themselves, says Gai.
Laura McDonough, associate director of insights & knowledge mobilization at the Future Skills Centre, says she’s not surprised by the study’s findings of a penalty, “although it is a novel way to frame it and ability to really demonstrate it.”
“I think with a lot of our projects related to AI — whether it's research about who's being impacted or actual efforts to implement it and train people and adopt AI — the fear about the negative impacts has been high, generally speaking.”
And the implications go beyond one company or sector, says McDonough.
“It's very unlikely that this is unique to engineering and coding. It's much more widespread than that.”
AI inequity and employee careers
While the findings of the study are AI-related, they’re rooted in gender inequality and status structures or power dynamics in society, says Gai — which means the findings apply to other marginalized groups such as racial minorities or people with disabilities.
And that suggests that employers and managers should look at the composition of their employee population, she says.
“They can look at what we call the vulnerable populations, like the mature workers and female workers, [who] probably feel threatened at [the] workplace,” says Gai, adding the dynamic can be reversed; for example, when a small group of men works in a female-dominated workplace.
It’s already been shown that AI has bias issues, says McDonough, so “it’s not surprising that tech adoption just reproduces the prejudices that we have and discriminations across our culture.”
This study names a dynamic that exists in the broader culture of a sector or organization, she says, “and that is really where the behaviour change, where the change management needs to occur, is to dismantle that perception, so that the adoption can continue and have the desired impacts on productivity that everyone hopes it will.”
The competence penalty has potential ripple effects on career advancement, promotions and salaries, says Gai.
“Female workers are frequently underpaid, because a lot of times, salaries are not fixed, but negotiable, and a lot of female employees, they're not asking for a higher pay,” she says, citing a lack of confidence.
And when people are paid lower compared to others, they automatically believe that others must possess better competence than them, says Gai: “So, I think the introduction of AI or an equal adoption of AI can exacerbate this problem.”
Addressing the competence penalty
Both Gai and McDonough agree that HR professionals and organizations must address the competence penalty to ensure equitable AI adoption.
To that end, one important step is to focus on the output, the productivity, rather than the process, says Gai.
“Some tech companies in the States or Canada, the leaders themselves, for example, the CEO, demonstrates how they're using the AI and how the AI has... been making them successful in front of their employees, and they organize those activities that reward AI creativity tasks.
“So, I think it can be a top-down approach. If the executives are using it first, that is probably making employees feel safer to use this tool instead of being judged.”
While this particular company had a team dedicated to promoting the use of the AI tool, the adoption gaps remained, she says.
“It’s not just enough to have a team for that — you really have to make people feel safe to use the tool and not just to be told that, ‘Oh, this tool is good for you, you should try it out’… you should really demonstrate this tool can augment your value at this company.”
Anxiety aspect of AI training
It's not enough to just develop the curriculum to teach a particular workforce how to use AI responsibly, agrees McDonough, citing research she’s done in healthcare.
“[There’s a] perception that, as a physician or a healthcare worker, if I’m using AI, that is somehow going to be perceived as having a negative impact on my patient care,” she says.
“There's often a cultural element and anxiety aspect of that training that needs to be accounted for in order for people to actually do that adoption.”
McDonough highlights the need for ongoing research and tailored training.
“We're so early in the game at this point, and… all of these issues are going to emerge, but we need to keep doing the research, keep checking the pulse to be able to navigate those and figure out ‘OK, this is something where we're going to need a guideline, where maybe we didn't know about this a year ago.’”
Pitfalls to mandatory disclosure of AI
A final note: The study raises concerns about mandatory disclosure of AI use in the workplace.
Gai suggests this kind of approach has its problems.
“If you ask people to report their AI use, probably marginalized groups do not use it at all because of the fear that it will be judged by others negatively. And, second, they can lie, they can lie about how much they use it. So, our suggestion is it's probably not a good idea just to force people to disclose that.”
Instead, employers may want to remove the “AI-generated” label to avoid the issue of discrimination — seen with the competence penalty — or “find a way to transform this label to a positive, desirable label, rather than a negative label with a stigma,” she says.
McDonough, however, emphasizes the importance of transparency, at least for now.
“I'm pessimistic that the solution is to not disclose. I think that maybe at some point in the future, it'll be so ubiquitous that we won't [need to] do that anymore.
“But for now, I think it's important for us to continue to do that — and we try to do that ourselves and encourage others to do it as well — in the face of a lack of overarching guidance around what actually should be done.”