Meta is offering its AI tech as open-source software, a move that rivals perceive as dangerous

Priya Walia

Meta


Meta has made its latest AI technology available to the public so that outsiders can create personalized chatbots. However, competitors such as Google have expressed concerns about the potential risks of this approach.

The tech giant, formerly known as Facebook, announced the release of a new large language model called LLaMA, short for Large Language Model Meta AI, in February. Although the model is not yet equipped with conversational abilities like ChatGPT or the Bing AI chatbot, it is accessible to researchers and organizations in government, civil society, and academia under a non-commercial license. Those who provide their email address to Meta and pass the company's approval process can then download the code.

In short, Meta is offering its AI technology as open-source software, enabling outsiders to access all the necessary resources to develop their own chatbots quickly and efficiently.

Google, OpenAI, and other companies have criticized Meta's decision to pursue an unrestricted open-source approach, regarding it as a potentially hazardous course of action. The rapid development of AI has already raised red flags about its ramifications, such as the potential to disrupt the job market if the technology is not deployed responsibly. However, Meta says it sees no rationale for keeping its code under wraps, as reported by The New York Times.

Zoubin Ghahramani, a Google vice president of research who oversees its AI work, has expressed concern about revealing details or open-sourcing code related to AI technology, citing the risk of misuse as a primary reason for this cautious approach.

However, Yann LeCun, Meta's chief AI scientist, disagrees, calling the growing secrecy at Google and OpenAI a “huge mistake” and a flawed reading of the current situation. He argues that the public and governments will only embrace AI when it is not solely under the control of a few powerful companies, such as Google and Meta.

Large language models are a form of AI that can comprehend and process human language. They have surged in popularity recently, with industry giants such as OpenAI, Microsoft, and Google developing some of the largest and most powerful models in the world.

Via The New York Times