Experts Criticize OpenAI's Decision to Keep GPT-4 Closed

March 16, 2023

OpenAI has released GPT-4, but its decision to keep the model closed has disappointed many experts, who say the move goes against the company's founding ethos. The lack of transparency has renewed concerns about the potential risks of AI systems, and balancing stakeholders' competing interests may require third-party regulators.

OpenAI has announced the release of its highly anticipated next-generation AI language model, GPT-4. The news, however, has been met with disappointment from experts and researchers, who note that, despite the company's name, the model is not open source. While OpenAI has shared benchmark and test results, as well as demos of the technology, it has disclosed no information about the data used to train the model, its energy costs, or the specific hardware or methods used to create it. Many in the AI community have criticized the move, arguing that it goes against the company's founding ethos as a research organization and makes it difficult for others to replicate its work.

The lack of transparency around GPT-4 has sparked concerns about the risks posed by AI systems like it. With competitive tensions rising and the field progressing rapidly, some experts argue that it is becoming harder to develop safeguards against potential threats. The closed release has drawn particular criticism because it comes just weeks after LLaMA, an AI language model developed by Facebook owner Meta, leaked online, triggering similar debates about the threats and benefits of open-source research.

Despite the backlash, OpenAI's Chief Scientist and Co-Founder, Ilya Sutskever, has defended the decision to keep the model closed, citing competitive pressure and safety concerns. The closed approach is a marked change for OpenAI, which was founded as a nonprofit in 2015 with the aim to "build value for everyone rather than shareholders" and "freely collaborate" with others in the field. The company later restructured as a "capped profit" entity to secure billions of dollars in investment, primarily from Microsoft, which now holds exclusive business licenses to its technology.

The debate over sharing research comes at a time of frenetic change for the AI world, with pressure building on multiple fronts. Tech giants like Google and Microsoft are rushing to add AI features to their products, often sidelining earlier ethical concerns. On the research side, the technology itself appears to be improving rapidly, sparking fears that AI is becoming a serious and imminent threat. Balancing these pressures presents a serious governance challenge, one that may require third-party regulators.

In summary, the release of GPT-4 has heightened concerns about the risks posed by AI systems, and the lack of transparency around its development has disappointed experts and researchers. As the field continues to progress rapidly, the competing pressures from different stakeholders will need to be balanced and appropriate safeguards put in place.
