Canadian researchers find inconsistencies, inaccuracies that could impact 'workplace entitlements and privileges, as well as discipline and termination'
“I'm not aware of any peer-reviewed scholarly work, or work otherwise, that shows how these applications improve the bottom line.”
So says Adam Molnar, assistant professor of sociology and legal studies at the University of Waterloo, discussing the impact of employee monitoring software.
Employers deploying this tech could face several issues around productivity metrics, cybersecurity risks and employee privacy, says Molnar, who analyzed 10 popular employee monitoring apps to uncover what he says are design flaws and unintended consequences.
Fellow researcher Danielle Thompson, a PhD candidate in the same department, says it’s an important lesson for HR.
“It really speaks to the need for HR — if they are going to consider adopting one of these applications — to not just go based off of what is advertised to them and how the app works, because… these apps maybe aren't functioning the way they should be or as effectively as they should.”
And that’s a concern if these tools are giving an inaccurate representation of an employee's overall workday and their productivity, she says, “and that data is being used to make important decisions related to job outcomes.”
Tools claim to identify ‘time-wasters’
The desire for greater control by employers is understandable as more people moved out of the office, says Molnar, “but this particular response, by introducing highly invasive data collection practices that open up additional security risks and privacy risks, and that potentially... are not proven [is questionable].”
And vendors are aware of that, he says, which is why they use language in their marketing such as “Identify time-wasters” or “Find out who’s lazy” to frame the apps as insightful for employers.
The employers’ interest in this software “has to be calculated against the necessity, so whether the monitoring is necessary for a specific purpose,” says Molnar.
“And given the huge range of functionalities, the sensitivity of data that's captured, the types of content that are captured, and how that extends beyond just those legitimate workplace activities — and then, of course, all the security and privacy vulnerabilities at code level — I think alternative, less intrusive means are the way to go.”
Default settings more intrusive
The apps — such as Hubstaff, CleverControl and Spyera — are installed on employee devices and typically monitor a range of behaviours by tracking keystrokes, browser activity, time spent in documents, print jobs, social media use, emails — and, in some cases, use facial recognition technology.
Most run in the background, invisible to the employee, while feeding real-time data to manager dashboards. Many distinguish between good performers and bad ones with colour coding.
“They're taking this vast, vast, vast amount of data and they're using it to produce at least what the vendors are calling productivity metrics or analytics, or workforce analytics or insights,” says Thompson.
Some of the apps can be modified within the manager portal, so they can toggle certain features on or off, specify which employees should be monitored or customize how each employee is being monitored, she says.
And while some applications advertise that they have employee-protective features, the researchers found those aren’t enabled by default.
“Managers might believe this is a less invasive software that I can use to monitor my employees in a way that's more protective of their privacy, but if they aren't knowledgeable enough to know, ‘Oh, I actually have to go and turn those features on’… then they're still monitoring their employees in invasive ways,” says Thompson.
False security, inaccurate measurement
To do the research, Molnar and Thompson carried out mock employee-employer scenarios with each app — for sectors with a high capacity for remote work, such as finance or education — and discovered some questionable features.
For example, a button labelled "stop and save" in one app merely paused a visible timer — monitoring continued in the background, says Thompson.
“That is a big concern if we're talking about employee privacy, because they might press that button and think, ‘OK, now I can look up this really private information’ — maybe it's about their health, maybe it's some other kind of private information — and think that their computer is no longer being monitored.”
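The behaviour Thompson describes — a pause control that stops only what the employee sees — can be sketched in a few lines. This is a hypothetical illustration of the design flaw, not the actual vendor code; the class and method names are invented:

```python
# Hypothetical sketch of a "stop and save" button that pauses only the
# employee-facing timer while background capture keeps running.
class MonitorSketch:
    def __init__(self):
        self.visible_timer_running = True   # what the employee sees
        self.capturing = True               # what actually runs

    def stop_and_save(self):
        # The button only touches the UI timer ...
        self.visible_timer_running = False
        # ... but never flips self.capturing, so keystrokes,
        # screenshots and browsing are still recorded.

    def record_event(self, event):
        # Events are logged regardless of the visible timer's state.
        return event if self.capturing else None


monitor = MonitorSketch()
monitor.stop_and_save()
# The timer appears stopped, yet events are still captured:
captured = monitor.record_event("keystroke")
```

The privacy failure is that the employee's mental model ("the timer is off, so I'm not being watched") and the app's actual state diverge silently.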
In another app, a lag between user activity and its appearance in the dashboard created inaccurate timelines. The researchers simulated a workplace environment where an employee alternated between tasks that would be considered productive and work-related, and tasks that would be considered lazy or distracted, like social media.
“We fed it all of this fake data… and there would be a large lag time between what activity I was currently on and when it was reflected within the dashboard,” says Thompson. “That gives them an inaccurate representation of what the employee is actually engaged in.”
A further example of false security? One company said managers using the software could comply with the European Union's General Data Protection Regulation (GDPR) — considered the gold standard for this type of protection — by making that selection.
But in looking at how the app functions at the code level, “it’s a button to nowhere,” says Molnar. “We found that, indeed, toggling that switch did not lead to... changes at code level.”
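A "button to nowhere" of the kind Molnar describes has a simple shape in code: a setting that the dashboard flips, but that the data-collection path never reads. The snippet below is an invented illustration of that pattern, not the audited app's source:

```python
# Hypothetical sketch of a compliance toggle that is a "button to nowhere":
# it flips a flag shown in the manager portal, but nothing in the
# data-collection path ever consults that flag.
settings = {"gdpr_mode": False}

def toggle_gdpr_mode():
    # Updates the setting the manager sees in the dashboard ...
    settings["gdpr_mode"] = not settings["gdpr_mode"]

def collect(event):
    # ... but collection never reads settings["gdpr_mode"], so
    # behaviour is identical whether the toggle is on or off.
    return {"event": event, "retained": True}

toggle_gdpr_mode()
record = collect("email_text")  # still captured, "GDPR mode" or not
```

Verifying a claim like this requires exactly what the researchers did: tracing whether the flag is referenced anywhere in the code that actually handles data, rather than trusting the label on the switch.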
Problems with apps’ productivity metrics
Another consideration? The apps’ metrics overwhelmingly focus on time and activity — not outcomes or work quality, according to the researchers, who will be presenting their study at the upcoming Congress of the Humanities and Social Sciences.
It's a critical issue, says Molnar, because these apps are associated with scores or ratings of employees, such as exemplary or untrustworthy.
“Workplace entitlements and privileges, as well as discipline and termination, are all hinging on these interpretations, and if the data itself is inconsistent or has inaccuracies, then that's a problem,” he says.
Many award badges such as “time hero” or “efficiency pro” based purely on device usage, not task completion.
“These apps create this false sense of productivity,” says Thompson.
“There's lots of applications that are pushing, essentially, the more time that you work, or the more active time you have on your device, the more productive you are, which, as we know, isn't always the case.”
There's also the concern that important work-related tasks occur off-device and are not accounted for within the metrics, she says.
“How do we define productivity? Is it the more active you are on your device, or is it things like the work outcomes? Are you actually completing the tasks that are assigned to you, the quality of the work? Is that how we're assessing productivity? Or is it literally just the amount of active mouse clicks and keystrokes you have on your device?”
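The distinction Thompson draws can be made concrete with a toy comparison — invented numbers and function names, purely to show why the two definitions of productivity can disagree:

```python
# Hypothetical sketch: an activity-based score rewards time-on-device,
# while an outcome-based score rewards finished work. The two can rank
# the same employees in opposite order.
def activity_score(active_minutes, keystrokes):
    # More device time and more keystrokes always mean a higher score,
    # regardless of what was accomplished.
    return active_minutes + keystrokes / 100

def outcome_score(tasks_assigned, tasks_completed):
    # An outcome-based alternative: fraction of assigned work finished.
    return tasks_completed / tasks_assigned

# A busy-looking worker who finishes little ...
busy = activity_score(active_minutes=480, keystrokes=20000)      # 680.0
# ... outranks a focused worker who completes everything:
focused = activity_score(active_minutes=300, keystrokes=8000)    # 380.0

busy_outcomes = outcome_score(tasks_assigned=10, tasks_completed=2)
focused_outcomes = outcome_score(tasks_assigned=10, tasks_completed=10)
```

Under the activity metric the "busy" worker wins (680 vs. 380); under the outcome metric the ranking reverses (0.2 vs. 1.0) — which is the gap between device usage and task completion that the researchers flag.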
Cybersecurity concerns with monitoring tools
The researchers’ most sobering finding may be the cybersecurity risks embedded in these tools. In one part of the study, the researchers worked with computer scientists to do code-level analysis of the applications, says Molnar.
“We examined them for security and privacy vulnerabilities and found that there were numerous vulnerabilities that would expose sensitive employee personal information… and make it fairly trivial to intercept or hack.”
In addition, one vendor boasting Fortune 500 clients was found to transmit employee data over unencrypted channels — leaving both personal and corporate information vulnerable.
“And that is interesting, because it's not just employee information, it's sensitive IP as well,” he says.
“The actual security of these applications — once they've deployed them in the workplace — could actually undermine cyber security and enhance the likelihood that sensitive company information and sensitive employee information could lead to a breach. So, there’s a disconnect there.”
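Why unencrypted transmission matters is easy to demonstrate. In the sketch below (invented payload, not the vendor's traffic), data sent over a plain-HTTP channel travels as readable bytes that any on-path observer can inspect; the remedy is transport encryption, i.e. sending the same payload to an `https://` endpoint over TLS:

```python
# Hypothetical sketch of the encryption gap: monitoring data posted over
# plain HTTP crosses the network as readable text.
import json

payload = json.dumps({"user": "alice", "keystrokes": "password123"})

# Over an unencrypted channel, the wire bytes are the payload itself —
# an on-path observer can read credentials and company data directly:
wire_bytes = payload.encode("utf-8")
readable_in_transit = b"password123" in wire_bytes  # True on plain HTTP

# With TLS (an https:// endpoint), the same payload is encrypted in
# transit, e.g.:
#   urllib.request.urlopen("https://example.com/ingest", data=wire_bytes)
# (urllib verifies server certificates by default.)
```

This is the "disconnect" Molnar points to: a tool deployed in the name of oversight can itself widen the attack surface for both employee and corporate data.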
‘Intrusive’ surveillance: Employee privacy concerns
When these applications are installed, they’re often not visible to the employee — so the individual could unknowingly share private information during a break or off-hours when they don’t realize they’re being monitored.
“It’s not only the employee’s personal information that would be captured,” says Thompson. “Because of the invasiveness of some of these applications — keystroke monitoring, email monitoring, social media, browser activity, other user behaviours — that combines personal information into the flow of legitimate business information. It doesn't separate these two things.”
And when people are working from home, there’s an “overreach” with these tools compared to what they’re used to in office environments, she says.
Some of the features of the employee monitoring applications (EMAs) are highly invasive, agrees Molnar.
“Cameras, if they are in a home environment, could capture not just the employee themselves, but other members that are living in the household, in the background or even beyond… and could give owners or managers insights into an employee that they shouldn't have access to, or would otherwise [such as] religious or political affiliations.”
Regulations lacking, say researchers
While some provinces are pushing for greater transparency with this newer tech — such as Ontario’s Bill 88 requiring companies with more than 25 employees to disclose monitoring practices — there are no limits on data collection or use, says Molnar.
“It’s a notification regime, basically,” he says. “It doesn’t provide any detail on placing limits on the kind of monitoring, on the kind of data that’s collected, or any limits on how the data would be used.”
The new rules don’t provide employees with any sort of protections about the degrees of monitoring that take place, says Molnar, or “whether the monitoring is necessary and proportionate, and whether that monitoring is actually balanced with other employee rights, which is what we see in other jurisdictions.”
By contrast, the UK’s Data Protection Act requires a three-part test of purpose, necessity, and proportionality.
“In other words, are there alternative, less intrusive means that can be undertaken... to place a limit on the range of data that's collected, including sensitive information?” he asks, citing as an example political expression, biometric data, sexual orientation or health details.
“That can all be collected through some of the features that are included in these EMAs that are not necessary to carry out the specific work activity.”
Best practices for HR with EMAs
Molnar and Thompson’s research has been shared with stakeholders including the BC General Employees’ Union and the Ontario Information and Privacy Commissioner. They continue to advocate for stronger regulation, increased transparency, and better education for both managers and employees.
“We’ve reported all these vulnerabilities back to the companies to notify them... we’re now looking at the gaps in provincial law across Quebec, Canada and BC,” says Molnar.
In the end, the researchers aren’t calling for a complete rejection of monitoring — just a smarter, more ethical approach.
“I would suggest that if companies or HR are actually considering the adoption of these software, that they put some serious consideration into the app that they're using, how it functions, and whether they actually need to be collecting all of this data in order to determine if their workers are being productive,” says Thompson.
Managers ought to consider outcome-based monitoring instead of behavioural monitoring, says Molnar.
“These apps favour behavioural monitoring, and that necessarily implies that there's better benefits for a company to collect all this granular data about their workers to boost productivity and efficiency — but it doesn't actually do anything to measure the quality of the work.”