Y Combinator rallies start-ups against California’s AI safety bill

24 Jun 2024


Y Combinator claims upcoming regulation in California could hamper the growth of open-source AI and ‘undermine competition’.

Silicon Valley-based accelerator Y Combinator and a host of AI start-ups have signed an open letter opposing plans to regulate the sector in California.

The company has spoken out against the state’s Senate Bill 1047, which aims to ensure the safe development of AI systems by placing more responsibility on AI developers. The bill would require developers of large “frontier” AI models to take precautions such as conducting safety testing, implementing safeguards against misuse and monitoring models after deployment.

The bill was amended last week in response to concerns from companies that the rules would hamper innovation. But in a recent letter shared by Politico, Y Combinator and various start-ups claim there are still issues with the proposed regulation and that it could “inadvertently threaten the vibrancy of California’s technology economy and undermine competition”.

Y Combinator argues in the letter that the responsibility for the misuse of large language models should rest “with those who abuse these tools, not with the developers who create them”. Politico reports that roughly 140 AI start-ups also signed the letter.

“Developers often cannot predict all possible applications of their models, and holding them liable for unintended misuse could stifle innovation and discourage investment in AI research,” the letter reads. “Furthermore, creating a penalty of perjury would mean that AI software developers could go to jail simply for failing to anticipate misuse of their software – a standard of product liability no other product in the world suffers from.”

The letter also criticises other aspects of the bill, such as the proposed requirement for a ‘kill switch’ to quickly deactivate AI models. Y Combinator claimed this proposal would impact the development of open-source AI.

“We believe a more balanced approach is necessary – one that protects society from potential harm while fostering an environment conducive to technological advancement that is not more burdensome than other technologies have previously enjoyed,” the letter reads. “Open-source AI, in particular, plays a critical role in democratising access to cutting-edge technology and enabling a diverse range of contributors to drive progress.”

AI has become a major topic in both the tech sector and government, as countries scramble to regulate the rapidly growing sector.

The EU approved the AI Act earlier this year, legislation regarded as some of the most detailed AI regulation in the world.

Leigh Mc Gowran is a journalist with Silicon Republic