
New Report Co-Led by Fei-Fei Li Pushes for AI Safety and Transparency


A new report from a California-based policy group, co-led by AI pioneer Fei-Fei Li, urges lawmakers to consider potential AI risks that have yet to emerge when shaping regulatory policies.

The 41-page interim report, released on Tuesday, is the product of the Joint California Policy Working Group on AI Frontier Models. This initiative was launched by Governor Gavin Newsom after he vetoed the controversial AI safety bill, SB 1047. While Newsom found the bill inadequate, he acknowledged the need for a more comprehensive evaluation of AI risks to guide legislation.

Li collaborated with co-authors Jennifer Chayes, dean of UC Berkeley’s College of Computing, and Mariano-Florentino Cuéllar, president of the Carnegie Endowment for International Peace. Their report advocates for increased transparency in AI development, particularly from leading labs like OpenAI. It was reviewed by industry figures holding differing views on regulation, including AI safety advocate Yoshua Bengio and Databricks co-founder Ion Stoica, who opposed SB 1047.

According to the report, new AI-related risks may require regulations mandating that AI developers disclose safety tests, data acquisition practices, and security measures. It also calls for stronger third-party evaluation standards and enhanced whistleblower protections for AI employees and contractors.

While the report notes there is no conclusive evidence that AI could directly enable cyberattacks or biological weapons, it stresses the importance of proactive policymaking. The authors argue that lawmakers don’t need to wait for catastrophic events to justify regulations, drawing a parallel to nuclear weapons: one does not need to witness a detonation to predict reliably that it would cause extensive harm.

The report proposes a “trust but verify” approach to AI governance. It suggests that AI developers be required to submit internal safety reports while allowing third-party audits to verify their claims.

Although the final report isn’t due until June 2025 and does not endorse specific legislation, it has been welcomed by both supporters and critics of AI regulation. Dean Ball, an AI researcher at George Mason University, described it as a positive step for California’s AI safety efforts. State Senator Scott Wiener, who introduced SB 1047, praised the report for advancing critical discussions on AI governance.

The recommendations align with elements of SB 1047 and its successor, SB 53, particularly in pushing for AI developers to disclose safety test results. More broadly, the report represents a significant step forward in AI policy discussions, reinforcing the urgency of responsible AI governance.
