France, Germany and Italy want Big Tech companies to self-regulate their foundation AI models, but SMEs claim this would shift responsibility to smaller businesses.
Potential changes to the EU’s AI Act are facing scrutiny from European SMEs, which are concerned that the regulation could raise barriers to entry for smaller businesses in this sector.
The issue relates to how the AI Act will regulate foundation models, which are large machine learning systems that power products such as OpenAI’s ChatGPT and Google’s Bard.
France, Germany and Italy have taken issue with how the act will regulate these models, arguing that it could stifle innovation. The countries have proposed that the companies behind these systems regulate their own models through company pledges and codes of conduct, according to a joint paper seen by Politico last month.
The European Digital SME Alliance, which represents more than 45,000 small and medium enterprises across Europe, claims that this form of self-regulation would shift the “responsibility of compliance” to the downstream deployers of these AI systems, such as SMEs.
The alliance argues that providers of large foundation models should undergo third-party conformity assessments to ensure that they are following the rules in the AI Act.
“In this way, the regulation would guarantee that SME downstream developers are not overburdened with heavy compliance costs and therefore the barrier to enter the market for SMEs would be lowered,” the organisation said in a statement.
The alliance said a stricter regulatory regime against larger companies is needed to ensure that the SMEs that use these models have “legal certainty”. The organisation said this form of regulation would ensure that compliance is handled “at the source, where most risks for fundamental rights and safety stem from”.
“Watering down obligations for very large foundation models in the AI value chain will result in either the inability of SMEs to comply with the AI Act or in a barrier to innovation for SMEs,” the alliance said. “Clear and strict obligations are needed to address the power imbalances in the market and ensure that innovative companies can thrive.”
Amnesty International also took issue with the three countries’ proposal last month, arguing that it puts the adoption of the AI Act at risk.
“The EU has an opportunity to demonstrate international leadership by adopting this robust, landmark regulation aimed at protecting fundamental rights and mitigating the multiple risks of AI technologies,” said Amnesty International secretary general Agnes Callamard.
“The EU must not falter at this final hurdle and EU member states, such as France, Germany and Italy, must not undermine the AI Act by bowing to the tech industry’s claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation.”