‘Scary times’: Building safer tools in the wild west of AI

5 Mar 2025


The horse may have bolted, but that just means we have to ride out and catch it, says Dr Elizabeth Farries, whose new project aims to create policies for safer AI development.

“These new technologies that are supposed to make our lives easier, we were discovering were harming human rights,” says Dr Elizabeth Farries, explaining what led her to research frameworks and policies for safe artificial intelligence (AI).

Farries came to academia a little later in her career. She was a lawyer in Canada, working mainly in intellectual property law, but also doing some criminal defence work along with human rights and Indigenous rights cases.

She moved to Ireland and did a PhD in law at Trinity College Dublin. At this time, she joined an international network of civil liberties organisations that came together, she says, because of “shared human rights concerns attached to this explosion of new technologies that wasn’t adequately constrained by government”.

Farries is now co-director, with Prof Eugenia Siapera, of the University College Dublin (UCD) Centre for Digital Policy. Last month, the centre began work on a new €3m EU-funded research project to explore the benefits and risks of AI from a whole-of-society perspective.

The ethos of the centre, Farries says, is that “policy should reflect primarily the understandings, perspectives and experiences of those impacted by new technologies”.

Dr Elizabeth Farries holding a laptop in front of a blue-lit background. Image: Elizabeth Farries

The best way to do this, she says, is through engaging with all impacted stakeholders. The centre’s advisory board includes people from what Farries calls the four cornerstone groups for digital policymaking: government, academia, civil society and industry. As well as this diversity of stakeholders, the centre’s academic staff includes sociologists, lawyers, computer scientists, linguists and members from various other disciplines, allowing for an interdisciplinary approach to research.

“STEM research is valuable, but it’s not the whole picture,” Farries says.

“And we shouldn’t be making research, development, marketing or regulatory decisions based on the limited world views of a small group of people.”

The new research project, FORSEE (Forging Successful AI Applications for European Economy and Society), which Farries is leading, is a collaboration among eight universities, research institutions and think-tanks across six European countries.

The overall aim of the project is to “develop this new approach to AI governance that guarantees more successful AI applications for society as a whole”, Farries says.

‘AI is supposed to make our lives better’

Since the 1950s, there have been explosions followed by lulls in the development of what we understand collectively as AI, Farries says. “And I would say right now we’re in a period of explosion.

“There’s a lot happening and there’s a lot of regulatory concern attached to it.

“We’re not in the age of, you know, robots with independent intelligence.

“That’s not where we are. We’re not in the place of moral panics that we see in sci-fi shows.

“But nonetheless, AI is being developed, and these are sites of negotiation and they’re sites of contestation.”

There are so many areas where AI is now having an impact, Farries says, and this needs to be fully considered at a policy level.

While AI is incredibly useful for cancer detection, the automation of health delivery could have negative effects on doctor-patient relationships.

While AI is effective at surveilling people from a policing perspective, its training biases mean that already marginalised groups are unfairly targeted.

And wherever AI can solve problems, it requires huge computing power: its data centres are energy intensive, creating challenges for the electricity grid in Ireland, for example, and contributing to growing carbon emissions during a time of climate crisis.

“Under all these questions is this broader goal that we have seen since the 1950s,” Farries says. “It’s this idea that AI is supposed to heighten the wellbeing of everyone in society. That’s the point, right?

“We’re supposed to have more fulfilled lives because AI makes things easier. And so, in short, the question of the project is, how do we ensure that AI is successful? How do we understand what that means?

“And not just from indicators that are about the economy, right, but for society as a whole. How do we measure that? And what are the conditions that make this so-called successful AI possible?”

Foreseeing success

Farries says we don’t currently have a holistic understanding of success in terms of AI. There are, she says, many groups of people that just don’t have a voice in current policy discussions, but they could have a “valuable input into the understandings of successful AI”.

One of the groups she mentions is small and medium enterprises (SMEs). Among the partners on the project is the European Digital SME Alliance, which is based in Belgium and represents more than 45,000 companies.

Other partners include Trinity College Dublin, Ireland’s TASC think-tank for social action, the WZB Berlin Social Science Centre, and universities and organisations in France, Finland and the Netherlands.

“Let’s understand what everyone thinks about it in order to secure those lofty goals about making things better for everyone,” Farries says.

“And to do that, we want to include this new evaluative framework for assessing current and future applications according to this updated understanding of success and a new sort of prototype for registering risks and negative impacts, again through this whole society understanding.”

The idea is this framework will enhance the capabilities of not just policymakers but also other stakeholders to address the risks and opportunities associated with emerging technologies as they come down the line, she says.

Leading a major project such as this must have its challenges, and with just three years to complete it, there’s a lot to do. Farries’ main concern as the project kicks off is to make sure that everyone involved gels well. “They’re a really good crew,” she says.

“They’re just generally perceptive and progressive, so I’m really grateful to be working with them.”

‘Scary times’

The worrying trends – seen particularly in the US – of a rollback on regulations and the displacing of democratic structures make any project that aims to balance innovation with regulation and concern for citizens’ rights particularly timely and potentially contentious.

“Scary times for all of us,” Farries says, though she doesn’t see these challenges as a reason for cynicism. “There’s a real opportunity for Europe in particular to do things differently now,” she says. Specifically, she sees the EU as having the chance to develop strong regulations but emphasises that these must be independent and balanced.

“The AI Act is, I mean, it’s to be commended in terms of its outcomes.

“Some people think the risk register is insufficient for managing this new space [but] honestly, it remains to be seen.”

And what about the fact that AI innovation is happening at superfast speeds, fuelled by major public and private investment? Can law keep up with tech? And will there be buy-in from countries to follow through with regulations?

It’s possible the AI Act will follow a similar pattern to the GDPR, Farries says.

“That’s an incredible piece of regulatory work, but the data protection authorities have been critiqued for not being able to enforce it, primarily because they’re outstripped economically by the people that they’re trying to enforce that regulation against [Big Tech]”.

As to whether regulation can keep up, Farries says she gets asked this question a lot.

“‘Hasn’t the horse already bolted?’ … Yes, it has.

“And that’s why we have these opportunities because from my perspective, sure, the horse has bolted, which means you, you know, ride across the prairie, loop him in, get him back safely into the barn, and then everyone goes away for dinner.

“You know, to me, that’s not game over.”


Rebecca Graham is production editor at Silicon Republic

editorial@siliconrepublic.com