Google has alerted its employees about the potential risks associated with AI chatbots

Priya Walia



According to Reuters, Google is reminding its staff about the appropriate use of chatbots, including its own AI program Bard, even as the technology giant promotes the tool worldwide.

Sources indicate that Alphabet, Google's parent company, has reiterated its longstanding policy on safeguarding confidential information, instructing employees not to enter such material into AI chatbots. The company confirmed these reports to Reuters.

In addition, Google has advised its engineers to avoid directly using computer code generated by chatbots. According to Reuters, the company said that although Bard can make undesired code suggestions, it still assists programmers in their work.

AI systems such as ChatGPT utilize neural networks, which are complex mathematical systems capable of acquiring new skills through data analysis. In recent years, major players in the tech industry, such as Google and OpenAI, have harnessed the power of neural networks to enable machine learning through large-scale analysis of digital text obtained from the internet.

By identifying patterns in this vast body of data, these networks learn to generate writing on their own, from news articles and poems to computer programs and even humanlike conversations. This innovation has led to state-of-the-art chatbots like ChatGPT. However, because these networks learn from quantities of data too vast for even their creators to fully comprehend, they sometimes display unexpected behavior.

Some industry experts worry that these problems could worsen as the systems are trained on ever-larger data sets.

An open letter signed by numerous influential figures last month cautioned about the possibility of AI eventually destroying humanity. The letter called for the mitigation of this existential risk to be treated as a global priority, much like preventing pandemics and nuclear war.

Via Reuters