According to a report from Bloomberg, numerous current and former Google employees voiced strong skepticism about the company's Bard chatbot in internal messages, denouncing it as "a pathological liar" and urging the company not to launch it. Bloomberg's account draws on conversations with 18 current and former Google workers, as well as screenshots of internal messages.
In internal discussions, one employee noted that Bard gave users dangerous advice on topics such as how to land a plane or how to scuba dive. Another wrote, "Bard is more harmful than helpful. It would be prudent not to proceed with the launch." Bloomberg also reported that the company disregarded a risk assessment submitted by an internal safety team, which had concluded the technology was not ready for general use.
Bloomberg's report suggests that Google has deprioritized ethics work in order to keep pace with rivals such as Microsoft and OpenAI. The company frequently touts its commitment to safe, ethical AI, but it has long faced criticism for putting business interests ahead of those concerns.
Google spokesperson Brian Gabriel told Bloomberg that AI ethics remains a top priority for the company. "We are continuing to invest in the teams that work on applying our AI Principles to our technology," he said.
Current and former employees say, however, that the team responsible for AI ethics at Google is now disempowered and demoralized. Staffers tasked with assessing the safety and ethical implications of upcoming products have reportedly been told not to get in the way of generative AI development, despite the risks the technology carries.
Google is racing to revitalize its search business with cutting-edge technology, potentially putting generative AI into smartphones and homes worldwide, in a bid to pre-empt competitors such as the Microsoft-backed OpenAI.
Meredith Whittaker, president of the Signal Foundation, which supports private messaging, and a former Google manager, said she is alarmed that AI ethics has taken a back seat at the company. "If ethics aren't positioned to take precedence over profit and growth, they will not ultimately work," she said.
In December 2022, Jen Gennai, Google's head of AI governance, convened a meeting of the responsible innovation group, the team charged with upholding the company's AI principles. There she suggested that some compromises might be necessary to speed up product releases. Google scores its products in several key categories to gauge their readiness for public launch; in some, such as child safety, engineers must clear the bar at 100 percent, but Gennai told the group that Google may not have time to wait for perfection in every area. "On fairness, we might be at 80, 85 percent, or something to be enough for a product launch," she said.
In February, an employee raised concerns about Bard in an internal message group, writing "Please do not launch" because the tool's answers contradicted themselves and contained inaccurate information. The message was viewed by nearly 7,000 people, many of whom agreed that Bard's answers were often factually wrong.
The following month, according to people familiar with the matter, Gennai overruled a risk assessment submitted by her own team that found Bard was not ready for public release because of the potential harm it could cause. Shortly afterward, Bard was opened to the public.
Via Bloomberg