Senate Democrats and one independent lawmaker have sent a letter to OpenAI CEO Sam Altman regarding the company’s safety standards and its treatment of whistleblowers.

Perhaps the most significant portion of the letter, first obtained by The Washington Post, was item 9, which read, “Will OpenAI commit to making its next foundation model available to U.S. Government agencies for pre-deployment testing, review, analysis, and assessment?”

The letter outlined 11 additional points to be addressed, including a commitment from OpenAI to dedicate 20% of its computing power to safety research, and protocols to prevent malicious actors or foreign adversaries from stealing OpenAI’s AI products.

First page of the letter from Senate Democrats. Source: The Washington Post

Regulatory scrutiny

Although regulatory scrutiny is nothing new for OpenAI and the broader artificial intelligence sector, the letter from Democratic lawmakers was prompted by whistleblower reports that OpenAI relaxed safety standards for GPT-4 Omni to avoid delaying the product’s market release.


OpenAI whistleblowers also claimed that efforts to bring safety concerns to management were met with retaliation and allegedly illegal non-disclosure agreements, driving the whistleblowers to file a complaint with the US Securities and Exchange Commission in June 2024.

Shortly after, in July, tech giants Microsoft and Apple gave up their roles on OpenAI’s board due to increased regulatory scrutiny. Microsoft’s decision to relinquish its board seat came despite its $13 billion investment in OpenAI in 2023.

Existential fears persist

Former OpenAI employee William Saunders recently revealed that he quit the company because he felt the ongoing research at OpenAI might pose an existential threat to humanity, likening OpenAI’s potential trajectory to the infamous sinking of the RMS Titanic in 1912.

Saunders clarified that he was not concerned about the current iteration of OpenAI’s ChatGPT large language model, but rather with future versions of ChatGPT and the potential development of advanced superhuman intelligence.

The whistleblower argued that employees working within the artificial intelligence sector have a right to warn the public about potentially dangerous capabilities exhibited by the rapid development of synthetic intelligence.
