Reports put spotlight on ethics of AI

United Nations, University of Toronto cite ‘worrying developments,’ call for greater oversight, transparency

In a hard-hitting report, the United Nations High Commissioner for Human Rights is warning of the potential downsides of artificial intelligence (AI) in a variety of contexts, including employment, and calling for major changes.

The commissioner cites “worrying developments” that include “a sprawling ecosystem of largely non-transparent personal data collection” along with “the risk of discrimination linked to AI-based decisions.”

Those concerns are echoed by a report from the Institute for Gender and the Economy (GATE) at the University of Toronto’s Rotman School of Management, which says businesses and governments urgently need to act to curb the technology’s potential to reinforce gender and racial inequities.

Fundamental problems with AI

While employers are increasingly using AI for “more efficient and objective information” to monitor and manage workers — including hiring, promotions and dismissals — the potential for invasions of privacy is a concern, says the UN report. For example, companies are increasingly collecting workers’ health-related data or using AI for workplace monitoring in people’s homes.

“Both trends increase the risk of merging the data from workplace monitoring with non-job-related data inputs. These AI-based monitoring practices constitute vast privacy risks throughout the full data life cycle.”

In 2020, Canada’s privacy regulator requested feedback on proposed guidelines for companies that use AI to collect personal data, and in November 2020, it issued a regulatory framework.

It’s also become apparent that the “quantitative social science basis” of many AI systems used for people management is not solid and is prone to biases, says the United Nations report The right to privacy in the digital age. For example, if a company uses an AI hiring algorithm trained on historical data sets that favour white, middle-aged men, the resulting algorithm will disfavour women, people of colour, and younger or older candidates who would have been equally qualified to fill the vacancy.
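To make that mechanism concrete, here is a minimal sketch in Python, using entirely synthetic, hypothetical data and an off-the-shelf scikit-learn classifier as a stand-in for a real hiring system. A model fitted to biased historical decisions ends up scoring equally qualified candidates very differently:

```python
# A minimal sketch of the failure mode described above, on synthetic,
# hypothetical data: a classifier trained on past hiring decisions that
# favoured one group learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
qualified = rng.integers(0, 2, n)   # 1 = candidate is qualified
is_male = rng.integers(0, 2, n)     # 1 = candidate is male (illustrative only)

# Biased historical labels: qualified men were almost always hired,
# equally qualified women only about 20% of the time.
hired = (qualified & is_male) | (qualified & (1 - is_male) & (rng.random(n) < 0.2))

X = np.column_stack([qualified, is_male])
model = LogisticRegression().fit(X, hired)

# Two equally qualified applicants who differ only in gender:
probs = model.predict_proba([[1, 1], [1, 0]])[:, 1]
print(probs)  # the "male" applicant scores far higher, purely from the labels
```

The point is not the particular model: any learner rewarded for matching biased historical labels will encode that bias.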

Plus, the accountability structures and transparency to protect workers are often lacking, and workers are given little or no explanation about AI-based monitoring practices, it says.

“While in some situations, companies have a genuine interest in preventing misconduct in the workplace, the measures to uphold that interest often do not justify the extensively invasive practices for quantifying the social modes of interaction and connected performance goals at work.”

Inequality and inequity in technology

While there can be good outcomes from the use of AI in terms of making processes, products and services more efficient and accurate, this technology is being created in a social and historical context where inequality and inequity already exist, says Carmina Ravanera, research associate at GATE.

“Even though people may think that technology is objective, or completely technical, in fact, research has shown in many ways that it's not.”

For example, Amazon abandoned a recruitment tool that turned out to be biased against women.

And while including more diverse people within a data set can help solve problems, there are potential negatives depending on how the data is used. For example, when race is removed as a variable from the data set, other variables end up serving as proxies for race, such as joblessness or economic insecurity, producing the same results, she says, “because racialized folks tend to experience these things more than people who are white.”
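A similar hypothetical sketch shows the proxy effect Ravanera describes. The race column is dropped before training, but a correlated feature (a synthetic “economic insecurity” score, with the correlation assumed purely for illustration) carries the same signal:

```python
# A sketch of the proxy problem on synthetic, hypothetical data:
# the race column is dropped before training, but a correlated feature
# still lets the model reconstruct much the same biased outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
racialized = rng.integers(0, 2, n)                # 1 = racialized group
# Assumed correlation, for illustration only: higher insecurity in that group.
insecurity = racialized * 0.8 + rng.random(n) * 0.4
# Biased historical outcome driven purely by group membership.
approved = (racialized == 0).astype(int)

# Train WITHOUT the race column: only the proxy feature is available.
model = LogisticRegression().fit(insecurity.reshape(-1, 1), approved)
preds = model.predict(insecurity.reshape(-1, 1))
for group in (0, 1):
    print(group, preds[racialized == group].mean())  # rates still diverge
```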

Algorithms have also been used to monitor employee performance, as seen at Uber and Amazon, says Ravanera, co-author of the report An Equity Lens on Artificial Intelligence.

“[Employees] need to get higher ratings or they'll be removed from platforms and things like that. And research has been done showing that can have poor mental health outcomes or increased stress and increased feelings of privacy invasions for employees.”

Any company that trains models on biased data further perpetuates this problem, says Jahanzaib Ansari, CEO of Knockri in Toronto, an AI video skills assessment tool.

“[It’s about] really making sure that there are steps in place, and eliminating that [bias] is extremely important… if there's any data that has been trained by humans, it is nearly impossible to fully get all the bias out of there and this is a huge problem.”

It’s a well-known fact that garbage in means garbage out, he says.

“The reason why the AI fails is because what you're feeding it is not good.”

The ethics of AI

Organizations may see AI as advantageous but fail to consider its unintended consequences, says Ravanera. The challenge is that prioritizing or exploring issues beyond the product and its efficiency involves added costs and consultations.

“If there's such a focus on the bottom line, above all other things, then we are going to continue facing these problems unless those other things are made more of a priority. And we've seen similar [issues] with diversity, equity and inclusion in companies where it's seen as not useful because it's seen as a waste of time and money,” she says.

“As long as systemic inequality exists in society — which it has for a very, very long time and will most likely continue to — then all of the different ways that technology is coming into the mix need to be examined.”

However, there are a lot of suppliers that have built technology ethically from the ground up, specifically to tackle discrimination, says Ansari. For example, Knockri doesn’t train the technology on any human-evaluated data, because “that can lead to a can of worms,” he says. Instead, it’s merged industrial and organizational psychology to look objectively at the data.

“If there's a specific definition of a skill set, as an example, that's been tested, it's been correlated to success predictors, on a job role… there's not a lot of gray area compared to a human evaluator who might be evaluating somebody based on a gut feeling or how they feel about them on a particular skill. With any technology,

if you can objectively get a definition or you can objectively figure out what you're looking for, based on science, that would significantly mitigate a lot of the risk here.”

Fixes to AI bias include greater transparency, oversight

The UN report outlines a range of ways to address the fundamental problems associated with AI, including “urgently” identifying and implementing solutions for overcoming AI-enabled discrimination, systematically conducting human rights due diligence throughout the life cycle of AI systems, and increasing the transparency of the use of AI, including adequately informing the affected individuals and enabling independent and external auditing of automated systems.
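As one concrete illustration of what independent, external auditing of an automated system might check (a common fairness heuristic, not a procedure the UN report itself specifies), the sketch below computes the “four-fifths” disparate-impact ratio over a hypothetical log of automated decisions:

```python
# One form an external audit check could take: the "four-fifths"
# disparate-impact ratio, a conventional adverse-impact heuristic.
def disparate_impact(decisions, groups, protected, reference):
    """Selection rate of the protected group divided by the reference group's.
    decisions: parallel list of 0/1 outcomes; groups: parallel group labels."""
    def rate(g):
        picked = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(picked) / len(picked)
    return rate(protected) / rate(reference)

# Hypothetical logged outcomes from an automated screening system:
decisions = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
groups    = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
ratio = disparate_impact(decisions, groups, protected="b", reference="a")
print(ratio)  # below 0.8 is a conventional red flag for adverse impact
```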

The GATE report also says that governments should create policies for AI that prioritize accountability and transparency, and require organizations to adhere to these principles. It also recommends that creators, researchers and implementers of the technology prioritize aligning AI with social values such as fairness — despite possible trade-offs for efficiency and profit.

It’s really important that any possible impacts of the data are assessed before it is released and used in public or in certain communities, says Ravanera, “because if that analysis isn’t done beforehand, that’s when those harms start coming about.”

And that needs to become common practice for companies, she says.

“Of course, it can mean higher costs; that can mean longer time developing a product, so it's been suggested that there needs to be new industry standards put in place where it's just a norm for safe and responsible AI.”

There is also a need for greater enforcement and oversight. That means policies and regulations “where AI prioritizes accountability to people and transparency about what they’re doing,” says Ravanera, citing the European Union’s Artificial Intelligence Act as an example.

“Because AI moves at such a fast pace, there's always something new happening, always something new coming out, and it's hard for government policies to catch up with that. But it's something I think that's really imperative now, as we're seeing more and more of these issues pop up, that government start really taking a handle on things.”

Audits and checks with third-party agencies can help ensure both greater accuracy and greater confidence, says Ansari, who would welcome greater oversight from a governing body, citing Canada’s Algorithmic Impact Assessment tool as an example.

“It would really separate the snake oil from the real AI technology out there… as long as it doesn't limit innovation too much. Because, obviously, you have to have a fine balance... if Canada, or any kind of any country, wants to invest in innovation, these things are going to be a part of the solution; however, this body shouldn't limit somebody from creating and exploring technology.”
