Google, Microsoft, and more form ‘Frontier Model Forum’ for safe generative AI development

Devesh Beri

As reported by Computerworld, Google, OpenAI, Microsoft, and Anthropic are the founding members of the Frontier Model Forum, an industry body that promotes responsible development and safety research in generative AI, announced in a post by OpenAI. The Forum will offer membership to organizations working on large-scale generative AI tools and platforms, with a focus on safety, best practices, public policy, and use cases that benefit society.

The Forum’s immediate goals include:

  • Identifying best practices to mitigate potential risks associated with generative AI.
  • Coordinating scientific research on safety measures.
  • Facilitating communication between corporations and governments to enhance safety and development.

The rapid growth of generative AI has sparked debates about its governance and regulation. The Frontier Model Forum represents the industry’s effort to regulate itself rather than wait for government rules. The technology’s legal issues have also come to the forefront: OpenAI is facing lawsuits over its use of copyrighted material and is under investigation by the US FTC over its handling of customer data.

The Forum aims to bring AI companies together to advance thoughtful, adaptable safety practices and ensure that powerful AI tools benefit humanity. By uniting leading players in the industry, it seeks to address concerns, foster collaboration, and raise safety standards in the fast-evolving field of generative AI.