Google workers question ethics of Bard chatbot

'AI ethics has taken a back seat,' says former manager

Threatened by the rise of OpenAI’s chatbot, Google seems to have rushed to launch its competitor – but has it thrown ethics concerns out the window along the way?

Google launched Bard in March this year even though the chatbot was not ready, Bloomberg News reported, citing statements from current and former workers at the tech giant.

In December 2022 – a month after OpenAI launched ChatGPT – senior leadership at Google decreed a competitive “code red” for the AI product and changed its appetite for risk. 

Google’s leaders decided that as long as it called new products “experiments,” the public might forgive their shortcomings, one employee told Bloomberg News.

That same month, Jen Gennai, AI governance lead at Google, convened a meeting of the responsible innovation group, telling them that they might have to make some compromises to pick up the pace for Bard’s launch. This included allowing Bard’s release even though parts of it had not cleared the quality thresholds set for the product.

The innovation group submitted a risk evaluation stating that Bard was not ready for launch, as it could cause harm. However, Gennai overruled that risk evaluation, Bloomberg News reported, citing people familiar with the matter. Bard was opened up to the public shortly after, with the company calling it an “experiment.”

However, Gennai said in a statement that it wasn’t solely her decision; an executive group “determined it was appropriate to move forward for a limited experimental launch with continuing pre-training, enhanced guardrails, and appropriate disclaimers.”

Recently, Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, along with some 2,200 other people, called for a six-month pause in developing systems more powerful than GPT-4, citing risks to society and humanity.

Since its launch, some employers have already replaced workers with ChatGPT, according to a previous report.

Employee concerns

“AI ethics has taken a back seat,” Meredith Whittaker, a former Google manager and president of the Signal Foundation, which supports private messaging, told Bloomberg News regarding Google’s launch of Bard. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

In February, shortly before Google introduced the chatbot to the public, Sundar Pichai, the company’s CEO, told employees to contribute two to four hours of their time to improve Bard’s conversational abilities.

“A pathological liar” and “cringe-worthy” were among the terms employees used to describe the chatbot. Other workers claimed Bard, when asked for suggestions, gave replies that “would likely result in serious injury or death,” according to the Bloomberg News report.

Meanwhile, Brian Gabriel, a Google spokesperson, said in the report that the company is “continuing to invest in the teams that work on applying our AI Principles to our technology.”

Google let go of three members of the responsible AI team in January, including the head of governance programs, according to the Bloomberg News report.

In January, Pichai announced in an email to staff that Google would lay off 12,000 workers.