Prof Vincent Wade told the Future Human audience how researchers at Adapt are looking to shape the future with AI that offers privacy, transparency and control.
At Future Human 2020, Prof Vincent Wade presented a keynote on the evolution of artificial intelligence and, in particular, new trends in what he called “human-centric AI”.
Wade works at the forefront of emerging AI technologies as director of the Adapt centre for digital media research. This Science Foundation Ireland-supported research centre is hosted at Trinity College Dublin but incorporates the work of eight higher-education institutions.
“Today, we’re seeing digital media and AI really focusing on automation, whether it be self-driving cars, whether it be the increasing speed of processing, being able to access via cloud, or AI being able to be much more accurate in terms of its decision-making in very narrow domains,” said Wade.
“But we’re also seeing the not so good. We’re seeing the lack of control in AI. Because AI doesn’t actually understand things. What it tends to do, in data-driven AI, is it looks at the data and then makes decisions on that, but it can’t explain what it’s doing. It’s very difficult to control unless you’re actually controlling the data itself.”
The use of vast swathes of data underpinning artificial intelligence comes with its own issues, particularly in terms of taking responsibility in handling data, but also in ensuring that the data on which a decision-making system is based has been vetted. This problem was recently highlighted by University College Dublin researcher Abeba Birhane, who helped uncover how the much-cited ‘80 Million Tiny Images’ dataset may have contaminated AI systems with racist and misogynistic slurs.
“We talk about data lakes but actually a number of them have turned toxic because they can’t be used, because of that provenance problem, because of those issues of trust,” Wade told the virtual audience at Future Human.
He is also conscious of how powerful data-led tools can be used. “We’re seeing that personalisation is being used just to keep people attentive, just to keep people online, rather than necessarily actually empowering them,” he said. “In the most recent articles around The Social Dilemma, we’re seeing that our attention is what actually ends up being the product, and that’s not always a good thing.”
Towards a balanced digital society
All of this explains why researchers at the Adapt centre are focused on achieving a “balanced digital society by 2030”, according to Wade.
The key issues of control, inclusion and accountability, he explained, impact all sectors. “It doesn’t matter what sector of industry you’re from – ICT, localisation, digital media, fintech, e-health, agri-food. They all are struggling with these.”
But looking optimistically towards near-future horizons, Wade sees an “evolution towards human-centric AI”. The drive for automation is accelerating the advance of these technologies, but in the context of a society that is becoming more privacy-conscious and more savvy about data governance and regulation, the shape they take could be very different.
“From an AI research perspective, we’re looking at how we can actually do more AI with less data,” said Wade. This means finding ways to drive automated decision-making without relying on stagnant data lakes. It means presenting users with tools that treat them as informed consumers and whose actions can be explained. Above all, it means empowering users, not monetising them.
“A lot of the business is actually built on the user being the product, where we’re seeing that the services are free but the user data is being sold in the background,” Wade added.
“That is a model which has been around for nearly 20 years now but it’s beginning to evolve. People are beginning to say maybe there’s a better way, and it’s being opened to disruption. It’s not that we don’t advertise, but what we are saying is that the services themselves should become the product.”