AI monitoring rises to new levels

With AI software purporting to detect issues such as sexual harassment and disengagement and to measure productivity, employee monitoring is rising to new levels. But employers must be careful when it comes to data quality, transparency and over-reliance, finds Sarah Dobson

Basic employee monitoring options have been available for years, whether it’s watching for fraud in benefits claims or keystroke logging to catch inappropriate activity.

But as artificial intelligence (AI) becomes more sophisticated, the options are broadening. Take, for example, a bot from Chicago-based AI firm NexLP that goes through company documents, emails and chat to identify digital bullying and sexual harassment.

There’s also the Isaak system from Status Today, a London, U.K.-based company; it analyzes email activity to measure employees’ productivity, well-being, collaboration and engagement patterns.

Or how about Veriato in Palm Beach Gardens, Fla., a company whose employee monitoring and insider threat detection software can record and track employee online activity, with video playback, while providing productivity reports and alerts?

That’s the beauty of AI, according to Pete Nourse, chief marketing officer at Veriato.

“If you're monitoring every employee in a 10,000-person company, 24-7, the amount of data is just crazy and it would be virtually impossible for us humans to sift through that and figure out: ‘Does this look right? Does this look wrong?’ And that's what the AI is great at,” he says.

“When something looks unusual, that's generally what it's looking for, and/or very specific things…  bullying terms, harassment terms, violence, sex, gambling… none of that stuff should be happening in your workplace.”

But the new tools raise more than a few questions: Do they invade employee privacy? Do they contribute to a culture of mistrust? Do they inadequately try to do a manager’s job?

“You do want to make sure that you're not swatting a fly with a sledgehammer where you're bringing in an intrusive system, poorly validated, in order to maybe, possibly, potentially give you some insight into a set of concerns that you might have been able to get at just by more sensitive and nuanced management work,” says Chris MacDonald, an associate professor and consultant on ethics at the Ted Rogers School of Management at Ryerson University in Toronto.

Benefits of monitoring tools

On the other hand, AI in employee monitoring offers the opportunity for insight that would have been literally impossible previously, he says.

“The temptation is to want to do something quantitatively and at least semi-scientifically that would have taken really subtle and nuanced leadership skills before. So, in the past, HR issues were part of the art of management as opposed to the science of it, so it was incumbent on the leader to have the sense of the pulse of the culture of their organization and to be able to read between the lines. And this kind of technology at least claims to be able to say, ‘No, no, it's not just a matter of the art of management…  we can detect algorithmically whether there's a problem.’”

AI and machine learning learn everybody's behaviour and understand it, so the system sets a baseline, says Nourse.

“It creates a digital fingerprint for everybody. And that's what it’s really looking for… a deviation from what your own personal norm is, but it's also looking for a deviation from the other people in your assigned group.”
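
To make that concrete, here is a minimal, purely illustrative sketch of the kind of baseline-and-deviation check Nourse describes: each employee's recent activity forms a personal baseline, and today's behaviour is compared both against that baseline and against the rest of the assigned group. The activity metric, the names and the three-standard-deviation threshold are assumptions for illustration, not Veriato's actual model.

```python
from statistics import mean, stdev

# Illustrative daily activity counts (e.g. files accessed) per employee.
# Names, metric and thresholds are hypothetical, not Veriato's actual model.
history = {
    "alice": [34, 31, 36, 33, 35, 32, 34],
    "bob":   [28, 30, 27, 29, 31, 28, 30],
    "carol": [33, 35, 34, 36, 32, 34, 33],
}
group = ["alice", "bob", "carol"]                # employees in the same assigned group
today = {"alice": 35, "bob": 74, "carol": 33}    # today's observed counts

def z_score(value, values):
    """Standard deviations between value and the mean of values."""
    sd = stdev(values)
    return 0.0 if sd == 0 else (value - mean(values)) / sd

THRESHOLD = 3.0  # flag anything more than three standard deviations out

for person in group:
    personal = z_score(today[person], history[person])   # deviation from own baseline
    peers = [today[p] for p in group if p != person]
    peer = z_score(today[person], peers)                  # deviation from assigned group
    if abs(personal) > THRESHOLD or abs(peer) > THRESHOLD:
        print(f"{person}: unusual activity (personal z={personal:.1f}, peer z={peer:.1f})")
```

Run against this invented data, only "bob" is flagged, because his count jumps far from both his own history and his peers' numbers for the same day; the point is simply that the deviation is defined relative to two baselines, not an absolute limit.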

In using psycholinguistics, the AI is also watching and analyzing everyone, he says, looking for signs of disengagement, for example, which could lead to company theft or a valued employee quitting.

“We always say, ‘Try to handle it when it's an HR situation before it gets to be a criminal or fireable offence,’” says Nourse. “Companies spend a lot of money training people, getting them up to speed, and, sometimes, it's hard to know when someone's disengaged [and] they're starting to look around. So maybe they can address it: Why are they disengaged all of a sudden? Maybe their boss is a jerk or they're being overworked… This can shed some light.”

Employers can also set alerts in the system around people working long hours or after-hours, to combat overwork or overtime requests, he says.

Status Today has developed methodologies that assess the way people communicate and collaborate, and how that affects their respective positions in the organization, says Ankur Modi, founder and CEO of Status Today.

“Taking these networks into account, we establish who are the influencers, who are the nodes, who are most connected to the rest of the organization, the most engaged, most influential — not by themselves, but also based on the people they talk to. Because these people implicitly become the hubs, the bridges where everything happens.”
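
The "hubs" and "bridges" Modi describes can be illustrated with standard graph-centrality measures over a communication network. The sketch below, using the networkx library on an invented message graph, is only an approximation of the idea, not Status Today's actual methodology.

```python
import networkx as nx

# Invented communication graph: an edge means two people exchange messages,
# weighted by message count. Not Status Today's actual data or method.
G = nx.Graph()
G.add_weighted_edges_from([
    ("ana", "ben", 40), ("ana", "cui", 25), ("ben", "cui", 30),
    ("cui", "dev", 12), ("dev", "eli", 35), ("dev", "fay", 28),
    ("eli", "fay", 20),
])

degree = nx.degree_centrality(G)            # how connected each person is
betweenness = nx.betweenness_centrality(G)  # how often they sit between other pairs

for person in G.nodes:
    print(f"{person}: degree={degree[person]:.2f}, betweenness={betweenness[person]:.2f}")

# "cui" and "dev" link the two clusters, so they score highest on betweenness --
# the bridges "where everything happens," in Modi's terms.
```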

More data is better data

While this all sounds well and good, there are definitely some considerations for employers going down this path.

For one, you've got to have a lot of data to train up your algorithm, which is supposed to get better the more data you feed it, says MacDonald.

“Combine that with the tendency among tech companies to get a minimal, viable product to market and then hone it, it's entirely possible that if you're buying access to one of these algorithms today, it's really a beta product, and your employees are part of the fodder that's helping train the algorithm.... If you're the 30th company to buy it, maybe you're getting a well-validated product. But I would be awfully cautious about a product that may well, fundamentally, be experimental.”

Employers should definitely ask how robust the data is, as there can be different sociocultural groups within a population, and different types of cultures have very different kinds of linguistic patterns, he says.

“You would want that to be taken into account both in terms of how an algorithm is trained up, but also in terms of how you're using it.”

And while people know they should be professional in their communications, sometimes they’re less formal or restrained, and that may not line up with what the AI expects.

“Even if you know that technically your employer owns your emails, that doesn't mean that employees actually expect their emails to be read for real. And so it raises worries, I think, about the extent to which an employer owns you during working hours. And part of that is the extent to which you're monitored,” says MacDonald.

If there’s a problem, AI doesn’t solve it, he says.

“It gives you at best a lead and what you choose to do with that information is an entirely separate question, and one that's got to matter a lot, because, in a lot of cases — and this is true for consumer surveillance — it's not so much that we're worried about the information itself, it’s [that] we're worried that someone's going to misuse it.”

One rule of thumb? Collect as little data as possible, and don’t be overly intrusive, says Modi.

“Don't try to create a monitoring culture where everything people do, everything an employee does, is captured. Have some sort of measured approach where activities that have an impact could be measured, whereas the rest of them are left to human nature.”

It's really about the proportional collection of data to build trust, he says.

“How you do your work should largely be left to you to figure out… Data collection should almost exclusively be on a metadata level, rather than on the content and the depth of it. And that's largely because there are a lot of cultural differences, there's a lot of biases in the content. Staying on the metadata at least provides some level of objectivity to say you should not be reading people's emails or files or things like that, you should stick to a very top-level, objective view of inbound and outbound communication, content activity or HR data as it might be.”
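
As a rough illustration of what "staying on the metadata" can mean in practice, the sketch below keeps only the headers needed for top-level activity metrics and deliberately discards the subject line and body. The sample message and the choice of fields are assumptions for illustration, not Status Today's specification.

```python
from email import message_from_string
from email.utils import parsedate_to_datetime

# A hypothetical raw message, used only to show which fields are retained.
raw_email = """\
From: alice@example.com
To: bob@example.com
Date: Mon, 03 Jun 2019 09:15:00 +0000
Subject: Quarterly numbers

Hi Bob, here are the figures we discussed...
"""

def metadata_only(raw):
    """Keep headers needed for activity metrics; never store subject or body."""
    msg = message_from_string(raw)
    return {
        "sender": msg["From"],
        "recipient": msg["To"],
        "timestamp": parsedate_to_datetime(msg["Date"]),
        # The subject line and message body are deliberately discarded here.
    }

print(metadata_only(raw_email))
```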

Intrusion and transparency concerns

Of course, there’s also the issue of people knowing they are being monitored all the time, says MacDonald.

“Legally being allowed to do something doesn't mean that it's OK to do it… That comes down to things like the reasonable expectations of privacy, the extent to which you trust your employees and want to convey to them that you trust them,” he says. “It’s a pretty serious choice to make as to whether you're going to use this kind of technology. And, more specifically, how you're going to use it, what purposes are you going to use it for? And what human systems are you going to have in place to deal with and respond to that data?”

It's only fair for people to know that they are being monitored, says MacDonald.

“As an employee, if I found out my employer was using some kind of data mining to look at email patterns to see if there was anger or disaffection or some kind of disgruntlement in a way that would help them fix problems, I might be OK with that.”

While there’s definitely a benefit to being able to predict or gauge when an employee might jump ship, based on their activity and communications, he says, what’s the cost of that monitoring?

“There is something to be gained by illegal searches by the police, too… but we worry about it. When it becomes intrusive, we worry about abuse of power and a whole bunch of related ethical issues.”

If an employer is using an AI tool to monitor employees, the privacy analysis really wouldn't be any different than usual, says Suzanne Kennedy, a partner at Harris & Company in Vancouver.

“Transparency and reasonableness are the key fundamental principles in privacy,” she says. “By way of an example, in B.C. in the private sector, employers are allowed to collect employee information without their consent, but they have to tell them what it is they're collecting and how it will be used.”

So, if an employer is using AI for monitoring, it would be really important to make sure that there are policies and it’s transparent with employees about what it’s doing, what's being collected and how it may be used, says Kennedy.

“The best recommendation to an employer going down this path is just to make sure that they do their homework first and that they've thought through: ‘OK, first of all, why are we using it? Is the information we're collecting going to be useful for those purposes? Are there other less invasive ways to do this? Or are there ways we can utilize this technology that are less invasive? And are we currently communicating with our workforce, so that people aren't unpleasantly surprised that this is what we're doing?’”

In general, people are more comfortable with being monitored because it’s become so commonplace, says Nourse, citing Google’s surveillance as an example. And if the AI is looking at everybody across the board, then you're not running into a risk of discrimination or the like.

“It’s [about] ‘We watch everybody, we want to make sure everybody's playing by the rules and doing good… It's not like we're going in and watching everything everyone's doing.’”

The software preserves people's privacy within a work environment as much as is reasonable, he says, “but it triggers when something unusual happens.”

Also of note: The data collected should not be exclusively visible to management, says Modi.

“While there might be different legal answers to this, the ethical right answer is share this information back with the employees... If companies are collecting data and they're worried about trust and privacy, the simplest way to alleviate that is by sharing whatever information you collect with the employee in question… If I could see what information on my name was being captured, then I would have a bigger assurance to understand the decisions that are made from it, or influences that are being derived from it, are accurate… That creates a very transparent culture.”

A crutch for management?

Another big concern around the use of AI is the lack of human involvement and over-reliance on technology.

“This is always the worry when managers get their hands on new quantitative tools… that you're going to start focusing on those things that you know how to measure,” says MacDonald. “Suddenly, you don't feel like you need to actually talk to your employees or keep your fingers on the pulse of the culture. Because, after all, you've got this algorithm that's going to tell you if there's any problems. You’ve got this algorithm that's going to tell you who the high-potential employees are,” he says. “You worry that this starts to become an opportunity for managers to just absolve themselves of responsibility for the fundamentally human task of managing.”

But these kinds of tools give people the information to be better managers, says Nourse, in seeing who is working hard or not working hard, for example.

“Maybe you see someone struggling because they need some more training and you don't realize that but, all of a sudden, the productivity report shows this person is working hard, but they're just not doing as [much work] as their counterparts. Well, they need some training. They're not lazy. [You] can see from the reports that they are working hard; they're just not producing as much so maybe we need to help them a little bit more.”

The goal of AI is to provide context at scale, says Modi.

“The decision-making is eventually a human job. So, in situations like that, when we talk about engagement, the role of the AI is to contextualize engagement, understand whether there is a change in pattern, understand whether there are other factors involved, and present all of this evidence back to the manager, who can then sit down and discuss this with the employee. Because that change in engagement might be intentional, it might be a sign of a bigger problem, it might be simply because the relationship with the manager has broken apart.”

AI can process information in such high volumes that the level of insight it generates is unprecedented, he says.

“A lot of people assume or are scared of AI taking the decisive role and using that information to automatically decide that ‘Oh, this team is disengaged, this team is important, this team is not important.’ That's not the role that AI needs to play.”

And open-ended data collection is not only dangerous, it promotes uninformed conclusions because not all managers are trained to read data the right way, says Modi.

“If you present data without context, without purpose, then people will make conclusions out of it and believe that they are being data-driven, where in reality, they are being more biased and not less. So, I think it's really important to have a defined purpose, training and legitimate and relevant data that is then collected to inform that,” he says.

“We're talking about people data. This is sensitive. Even a small mistake in decision-making affects someone's career, affects someone's life.”


EMPLOYEE MONITORING IN THE U.K.

The least acceptable forms of surveillance:

  • facial recognition software and mood monitoring (76%)
  • monitoring of social media accounts outside of work (69%)
  • recording a worker’s location on wearable or handheld devices (67%)
  • monitoring of keyboard strokes (57%)

Source: Trades Union Congress


6 in 10

Proportion of people who fear that greater workplace surveillance through technology will fuel distrust (65%) and discrimination (66%)

56%

Percentage of workers who believe they are monitored by their boss at work

3 in 4

Proportion of workers who say bosses should be banned from monitoring them outside of working hours

Source: Trades Union Congress


30%

Percentage of employers using some type of non-traditional monitoring techniques in 2015

50%

Percentage of employers using some type of non-traditional monitoring techniques in 2018

10%

Percentage of employees comfortable with their employer monitoring their email in 2015

30%

Percentage of employees comfortable with their employer monitoring their email in 2018

43%

Percentage of employers that think monitoring employee conversations is an invasion of privacy in 2015

10%

Percentage of employers that think monitoring employee conversations is an invasion of privacy in 2019

Sources: Gartner, GetApp
