AI, big tech and the ethics of prediction

24 May 2023

AI ethics researcher Nell Watson discusses the challenges of regulating an industry that is developing at ‘breakneck speed’.

“We have clearly entered a new era,” says Nell Watson, who researches artificial intelligence (AI) and describes herself as an “ethics scientist, AI philosopher and advocate”.

Watson first became interested in machine learning “as part of working to solve machine vision problems to enable body measurement using a camera”. She co-founded QuantaCorp, a mobile body-sizing platform that was acquired by BodiData in 2022.

Around 2014, when new deep learning techniques enabled the company to train a system to isolate a person from the background in order to take measurements, Watson realised AI “was going to change the world”.

Image: Nell Watson

However, it was also around this time that Watson became concerned about the ethics of AI. “Horribly biased and broken systems started emerging in recruitment and criminal justice,” she says. “The lack of transparency and auditability of these technologies presented a serious challenge, not only for individuals, but society at large.”

Watson says she has dedicated her work since then “to improving the state of play, helping to develop new standards and certifications for AI systems and organisations behind them”.

Ethical considerations

Though she describes herself as passionate about engineering, Watson is wary of the trust society puts in technology. “There is a tendency to trust systems too much, to take their impressions or predictions at face value, even when they may be based upon false predicates,” she warns.

“It’s very difficult to audit and debug systems for prejudicial treatment of people, especially as these systems can make inferences that humans cannot. For example, no human radiologist can meaningfully tell ethnicity by looking at bones, but an AI system can, from its ability to find patterns within patterns.”

In Watson’s view, AI systems are currently “too unreliable and untrustworthy to be used in potentially dangerous circumstances”. “It will take time to uncover the myriad ways in which these systems can go wrong, or be abused, and to create better rules around how they are to be used,” she asserts.

‘A Sputnik moment’

However, time is not on the side of those concerned about AI. Watson describes the speed of innovation instigated by the Big Tech AI arms race as “a Sputnik moment”.

In this expensive competition for supremacy, regulation falls behind technological advancements. “It’s very difficult to create effective policy, especially in advance. It typically is done in the rear-view mirror once problems become obvious,” Watson explains. “However, a lot of damage can be done to society in the meantime.”

Last week, a leading figure in the AI race, OpenAI CEO Sam Altman, spoke to a US Senate subcommittee about the potential misuses of AI and called on lawmakers to regulate the industry. OpenAI developed ChatGPT, the generative AI chatbot that was released in November last year and instigated the current industry frenzy.

On the other side of the pond, the European Commission has moved closer to implementing its AI Act, which it claims will rein in “high-risk” AI activities.

Though clearly in favour of these steps towards regulation, Watson cautions against excessive oversight. “Balance and moderation are necessary for effective governance of AI,” she says.

Governance becomes trickier still when companies lack the personnel to create and maintain standards. Watson is concerned about the Big Tech companies that have been laying off their ethics staff. “As an arms race heats up, it’s just the worst possible time,” she says.

“This will come back to bite them, as they realise that there is a huge gap between creating something that works in a lab, or as a toy, and something deployable in real-world conditions in ways that affect real people.”

For Watson, “ethics needs to become a dedicated department within organisations”. She sees this as an example of some of the jobs AI can create, though Watson agrees that “a lot of jobs will be disrupted and aren’t coming back”.

‘Creepy is the new normal’

In Watson’s view, the current “breakneck speed of AI development” means that “something could be designed, deployed and causing havoc within a matter of days, maybe hours, far too fast for anyone to meaningfully respond”.

Surprisingly, “a cyber 9/11” is not what Watson is most concerned about when it comes to AI. Rather, it is the “probability of being driven crazy by AI systems that can predict our emotions and generate content and interactions that nudge us in various ways”.

“Such demoralisation is an extremely effective tactic and is being deployed to try to pull societies down from the inside,” she argues.

The predictive abilities of AI machines, fed on vast amounts of data garnered from social media and other online activities, are improving all the time.

In his book I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique, Dr Tomas Chamorro-Premuzic argues that the combination of vast datasets and cheaper, faster AI technologies has brought about a “new economic order” of “automated insights and nudges” that can shape “human activity in commercially advantageous ways”.

“We are now well aware of what algorithms know or may know about ourselves and others; when it comes to AI, creepy is the new normal,” Chamorro-Premuzic says.

Watson sees this unchecked power of predictive AI as potentially “a greater threat to civilisation than rogue AI systems”, though she is keen to stress that the “threat of such systems merits very serious concern also”.

Rebecca Graham is production editor at Silicon Republic

editorial@siliconrepublic.com