OpenAI’s opposition to a Bill passing through the California State Legislature has in turn been criticised by Senator Scott Wiener, the key proponent of the legislation.
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, known as SB 1047, was introduced by Wiener and subsequently passed the State Senate in May.
The legislation would require AI developers to ensure their products can be easily shut down, and that ‘reasonable care’ is taken in their development. OpenAI, one of the biggest AI firms globally, claims that the laws could hold back the ‘AI revolution’.
The company – widely known for being the operator of the ChatGPT AI chatbot – wrote a letter to the California Senate arguing that AI legislation should be left to the US Congress while also raising national security concerns. This letter was subsequently obtained by Bloomberg.
Responding to OpenAI’s letter, Wiener stated that he agrees that in an ideal world AI regulation should be left to Congress. However, he and the Bill’s proponents do not believe Congress has acted enough, and are sceptical that it ever will.
He also took aim at the national security concerns raised in OpenAI’s letter, noting that SB 1047 has been publicly supported by John (Jack) Shanahan, a retired Lieutenant General in the US Air Force, and the Hon. Andrew C. Weber, former Assistant Secretary of Defense for Nuclear, Chemical & Biological Defense Programs.
SB 1047’s key national security provision requires AI developers to ensure that the solutions they create cannot subsequently be used to develop weapons that could jeopardise US national security.
“I’m optimistic about this balanced legislative proposal that picks up where the White House Executive Order on AI left off,” Lt Gen Shanahan remarked.
“SB 1047 addresses at the state level the most dangerous, near-term potential risks to civil society and national security in practical and feasible ways. It thoughtfully navigates the serious risks that AI poses to both civil society and national security, offering pragmatic solutions.
“It lays the groundwork for further dialogue among the tech industry and federal, state, and local governments, aiming for a harmonious balance between fostering American innovation and protecting the country.”
OpenAI is not the only firm in California concerned about SB 1047 and its implications, however. Various tech firms have expressed concern about, and opposition to, the legislation. One provision in particular has worried some: the prospect of AI firms facing state-issued penalties should SB 1047’s requirements be breached.
As California is a hub for AI and tech development, opponents of the legislation claim that some of its stricter requirements could drive firms out of the state, depleting an industry which plays a key role in its economy.
Senator Wiener is undeterred by this, however. In his response, he noted that OpenAI hasn’t criticised ‘a single provision of the bill’, and has acknowledged that the legislation’s core requirements for promoting safety are ‘reasonable and implementable’.
Given that OpenAI, along with some other major tech firms involved in AI’s development, has openly committed to a responsible future for the technology, it could be assumed that the Bill’s core principles are already being met by California’s tech sector.
Wiener summarised: “Bottom line: SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.
“We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”