Key points and reactions from California's proposed AI bill

Introduction

The AI industry continues to generate buzz across the globe with innovations and breakthroughs. However, there have been fears that AI also has the potential to cause irreparable damage. The number of people calling for regulation of AI technology keeps growing. The US state of California has now taken the issue to another level by proposing legislation to govern the development of AI products. The bill is called SB-1047. If you want to see the entire content of the bill, click here

Right now, SB-1047 has gotten the approval of the California State Senate. The next stage is for it to pass the State Assembly. The bill contains requirements that builders of AI models must abide by if their products are to be used in the state. The release of the bill has, unsurprisingly, drawn lots of reactions from the tech community and the world at large. In this discussion, we will look at key components of the bill and what the tech community thinks about it.

Building AI models and taking responsibility

The main aim of the California AI bill is to make developers or builders of AI products responsible for any harm that may arise from their inventions. AI has been alleged to endanger privacy and sometimes produce misleading results that could lead to serious physical harm or economic sabotage. Here are two key takeaways from the bill and their implications:

  • Third-party audits: One of the provisions of the AI bill is that there should be periodic audits of AI models and products to ensure that they are operating as designed and in compliance with the bill.

The auditor appointed to check the model should not be part of the development team. It should be a stand-alone auditing firm that can compare the AI model's stated design and operational features against the requirements of SB-1047. Below is an extract from the proposed bill on this aspect:

Beginning January 1, 2026, a developer of a covered model shall annually retain a third-party auditor that conducts audits consistent with best practices for auditors to perform an independent audit of compliance with the requirements of this section. source

This aspect of the bill is meant to ensure that AI models operate within the boundaries of their design. It means that the product would not be modified to perform work other than what it was designed to do. In addition, this part of the bill aims to make sure that the AI does not become harmful along the way and create issues for users.

While audits are not bad in themselves, many fear that such audits and strict performance requirements would simply stifle further upgrades and development of a model. For example, if a developer noticed a weakness and wanted to alter some features of the model, the bill could prevent them from doing so. Instead, the developer would be required to register the modified AI as a new model.

  • Report incidents to authorities: This aspect of the bill requires AI model developers to promptly let the authorities know about any problems or dangers that occur when people use the model. They are expected to report any such issues no later than three days (72 hours) after they find out about them. You can see below how it is stated in the proposed bill.

A developer of a covered model shall report each artificial intelligence safety incident affecting the covered model, or any covered model derivatives controlled by the developer, to the Attorney General within 72 hours of the developer learning of the artificial intelligence safety incident. source

Once an issue is reported, the Attorney General would decide what should be done about the matter, also checking against the most recent audit report. The bill grants the Attorney General the power, if necessary, to direct that the AI model be taken out of action. The Attorney General might also consider taking legal action against the AI developers if some form of negligence is discovered in the findings.

While this aspect of the bill looks great because it makes developers take more responsibility for their work, it might also discourage innovation. If developers are not completely sure how their AI model will behave in the real world, they might, out of fear of this bill, decide not to take chances or venture into the unknown. That caution would reduce the amount of innovation that would otherwise have happened in the AI industry.

There are many other aspects of the AI bill that are not mentioned here. For example, another provision requires builders to include a shutdown capability, effectively a kill switch, in the AI. This feature would make it possible to shut the model down immediately if the authorities believe that there is sufficient reason to do so.

The tech community reacts

Lots of well-known people in the tech space have made their voices heard about the bill. While some are for it, many have shared big concerns about the requirements. The biggest fear is that innovation in the AI sector would be limited by so much liability being placed on AI builders. Many will simply not attempt to create something for fear of being held responsible for any issues that users might have when using the model.

One of the most surprising reactions to the bill came from Elon Musk, the owner of X (formerly Twitter). While some in the tech sector, like Google and Meta, disagreed with the bill, Musk has thrown his support behind it. He called on the state to pass the bill, describing it as a welcome development for the sector. Musk shared his sentiments using his official X handle. Many will disagree with him, but he has always been controversial, hasn't he?

What do you think about the issue? Should AI be regulated? Share your opinions in the comment section.


Note: thumbnail is from Pixabay

Posted Using InLeo Alpha


