US lawmakers send a letter to OpenAI requesting government access
Senate Democrats and one independent lawmaker have sent a letter to OpenAI CEO Sam Altman regarding the company’s safety standards and employment practices toward whistleblowers.
Perhaps the most significant portion of the letter, first obtained by The Washington Post, was item 9, which read, “Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?”
The letter outlined 11 additional points to be addressed, including securing OpenAI’s commitment to dedicate 20% of its computing power to safety research and to implement protocols preventing malicious actors or foreign adversaries from stealing its AI products.
First page of the letter from Senate Democrats. Source: The Washington Post

Regulatory scrutiny
Although regulatory scrutiny is nothing new for OpenAI and the broader artificial intelligence sector, the letter from Democratic lawmakers was prompted by whistleblower reports that safety standards for GPT-4 Omni were relaxed to ensure the product’s market release was not delayed.
OpenAI whistleblowers also claimed that efforts to bring safety concerns to management were met with retaliation and allegedly illegal non-disclosure agreements, driving the whistleblowers to file a complaint with the US Securities and Exchange Commission in June 2024.
Shortly after, in July, tech giants Microsoft and Apple walked away from observer seats on OpenAI’s board amid the increased regulatory scrutiny. The decision not to take up a board seat came despite Microsoft’s $13-billion investment in OpenAI in 2023.
Existential fears persist
Former OpenAI employee William Saunders recently revealed that he quit the company because he felt the ongoing research at OpenAI might pose an existential threat to humanity, likening the potential trajectory of OpenAI to the infamous sinking of the RMS Titanic in 1912.
Saunders clarified that he was not concerned about the current iteration of OpenAI’s ChatGPT large language model but was more concerned with future versions of ChatGPT and the potential development of advanced superhuman intelligence.
The whistleblower argued that employees working within the artificial intelligence sector have a right to warn the public about potentially dangerous capabilities emerging from the rapid development of synthetic intelligence.