OnMSFT.com
Is Google Bard a pathological liar?

OnMSFT Staff
April 19, 2023
3 min read

According to a report from Bloomberg, numerous current and former Google employees expressed deep skepticism about the company’s chatbot Bard in internal communications. One employee denounced the chatbot as “a pathological liar,” and others urged the company not to launch it. The report drew on conversations with 18 current and former Google workers, as well as screenshots of internal messages.

In those internal discussions, one team member noted Bard’s tendency to give users dangerous advice on subjects such as how to land a plane and how to scuba dive. Another wrote, “Bard is more harmful than helpful. It would be prudent not to proceed with the launch.” Bloomberg also reported that the company overrode a risk assessment from an internal safety team, which had concluded that the technology was not ready for general use.

Bloomberg’s reporting suggests Google has deprioritized ethical considerations in order to keep pace with rivals such as Microsoft and OpenAI. While the company frequently highlights its focus on safety and ethics in AI, it has long faced criticism for putting profits ahead of ethical concerns.

Brian Gabriel, a spokesperson for Google, told Bloomberg that AI ethics remains a top priority for the company. “We are continuing to invest in the teams that work on applying our AI Principles to our technology,” he said.

According to current and former employees, the team responsible for ethical considerations at Google has been left disempowered and demoralized. Those charged with assessing the safety and ethical implications of upcoming products have reportedly been told not to get in the way of generative AI development, despite the potential risks of the technology.

Google is racing to modernize its search business with state-of-the-art technology, potentially bringing generative AI to smartphones and homes worldwide, in a bid to stay ahead of competitors such as the Microsoft-backed OpenAI.

Meredith Whittaker, president of the Signal Foundation, a nonprofit that supports private messaging, and a former Google manager, voiced concern that AI ethics has been pushed to the back of the agenda. “If ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work,” she said.

Jen Gennai, Google’s head of AI governance, convened a December 2022 meeting of the responsible innovation group, which is tasked with upholding the company’s AI principles. During the meeting, Gennai suggested that some concessions might be necessary to speed up product releases. The company scores its products in several critical areas to assess their readiness for public launch. While some categories, such as child safety, require a 100 percent score before engineers can proceed, Gennai advised the group that Google might not be able to wait for perfection in every area. “On fairness, we might be at 80, 85 percent, or something to be enough for a product launch,” she said.

In February, an employee raised concerns in an internal message group, writing “Please do not launch” because Bard’s responses were contradictory and factually inaccurate. The message was viewed by nearly 7,000 people, many of whom agreed that the tool’s answers contained factual errors.

The following month, people familiar with the matter said, Gennai overrode a risk assessment submitted by her own team, which had concluded that Bard was not ready for public launch because of the harm it could cause. Shortly afterward, Bard was opened to the public.

Via Bloomberg
