More outside input, particularly from academia, can help Silicon Valley reduce risk
By Gina Chon
SAN FRANCISCO (Reuters Breakingviews) - Big Tech could use some schooling on ethics. Alphabet’s Google drew so much flak over the outside advisers it chose to help it responsibly develop artificial intelligence that it abandoned the effort. Yet it deserves credit for training some staff on the dangers of unintended bias in algorithms and other issues. More outside input, particularly from academia, can help Silicon Valley reduce risk.
Google recognized it needed assistance navigating the moral dilemmas presented by AI, but its outreach attempt backfired. Some 2,500 tech industry employees, academics and others signed a letter calling on the firm to remove a conservative think-tank president from its AI advisory council because she had criticized proposals supported by transgender, gay and lesbian people. Instead, the company disbanded the panel.
It’s an embarrassment for a company that has been proactive in getting employees to think about unintended consequences of their products. About 100 Googlers have participated in training based on a technology ethics project at Santa Clara University. It provides case studies, like creating a virtual assistant to help corporate executives, and challenges students to get out of their bubble and question assumptions. It also recommends remembering the “terrible people” who might steal, abuse or hack a product.
Other universities offer similar courses. Stanford’s has computer science students examine real-world situations, like bias in bail algorithms used by judges or privacy issues arising from parents posting pictures of their kids on Instagram. Harvard and MIT teamed up last year to offer instruction on the ethics and governance of AI.
These initiatives are useful because technology can embed the prejudices of its human designers. Blacks and Hispanics hold only 13 percent of engineering jobs, while women hold about 14 percent, according to 2018 Pew Research Center data. That matters as tech takes over increasingly complex decisions from people, such as facial-recognition programs used by law enforcement.
The cries of bias against Silicon Valley are also getting louder. In a discrimination lawsuit brought by the Department of Housing and Urban Development last week, Facebook was accused of allowing advertisers to target housing ads based on race and religion. Coders and executives should recognize why that’s wrong. It’s a teachable moment for Big Tech.