Academics can be ‘bridge’ to ethical development, says AI expert

28 Nov 2023


Dr Anastasia Griva wants to see more positivity in discussions about the potential of AI.

“It is clear that ensuring ethical AI is everybody’s business,” wrote Gabriela Ramos, UNESCO’s assistant director-general for social and human sciences, in her introduction to the organisation’s Recommendation on the Ethics of Artificial Intelligence.

“AI technology brings major benefits in many areas, but without the ethical guardrails, it risks reproducing real-world biases and discrimination, fuelling divisions and threatening fundamental human rights and freedoms,” Ramos said.

Earlier this week, 18 countries, including the UK and US, published guidelines for the secure development of AI. According to the guidelines, “AI systems have the potential to bring many benefits to society. However, for the opportunities of AI to be fully realised, it must be developed, deployed and operated in a secure and responsible way”.

University of Galway’s Dr Anastasia Griva is an expert in AI and analytics. She recently spoke to SiliconRepublic.com about what she describes as “a spectrum of ethical concerns” related to the development of AI.

“These encompass issues ranging from job displacement and embedded bias due to the data used to train generative AI (GAI), to concerns about transparency and fairness in AI-related decisions, data ownership, intellectual property and the potential for GAI to be used in generating persuasive content such as deepfake videos and personalised misinformation,” Griva explained.

Collaboration is key

For Griva, universities and the scientific community can and should play a key role in ensuring the ethical development of AI systems. “Partnerships between academics, industry and public policymakers are important for responsible GAI development.”

Griva thinks that collaboration and partnerships are necessary to “harness the benefits of innovation while safeguarding against potential risks and ensuring that AI aligns with societal values and priorities”.

Griva notes that policymakers create the rules and regulations, but “academics act as a bridge between policymakers and businesses, conducting research and finding the best ways to create ethical GAI standards and practices.

“Together, they form the backbone of how we make GAI that’s safe and respectful of everyone.”

UNESCO also highlighted the value of education and research in its recommendations and urged countries to “encourage research initiatives on ethical AI”.

In its “human-rights approach to AI”, UNESCO calls for all AI actors to avoid and address “unwanted harms (safety risks) as well as vulnerabilities to attack (security risks)” at all stages of the AI lifecycle, “ranging from research, design and development to deployment and use”.

The rise of disinformation

As AI models become more sophisticated, so does their ability to create convincing content, which can be used to spread mis- and disinformation.

In a recently published paper, researchers argue that academics can help to tackle AI-generated disinformation. Because common forms of authentication, such as checking whether a photograph looks realistic or whether an email is well written, no longer work, the paper suggests that further research is needed to develop tools and techniques to verify information, and to educate people on how AI models work and what distinctive characteristics to look out for in their outputs.

“Impactful research questions seek to identify cues that are transparent and difficult to game with generative AI, to understand the effectiveness of behavioural interventions aimed at mitigating AI-generated disinformation, and to prepare new media literacy training that is tailored to the upcoming challenges of the generative AI era,” the researchers state.

AI researcher Nell Watson has warned that people place too much faith in AI outputs. “There is a tendency to trust systems too much, to take their impressions or predictions at face value, even when it may be based upon false predicates,” she told SiliconRepublic.com in a recent interview. As a result, Watson has dedicated her work to developing standards and certifications to improve the transparency and auditability of AI systems.

Data and transparency

According to UNESCO, “the ethical deployment of AI systems depends on their transparency and explainability”.

As Griva noted: “The rapid proliferation of AI models has, at times, led to insufficient scrutiny of the data used to train them. Often data is used or scraped from the internet to feed these AI systems, resulting in various biases, with selection bias being particularly prominent due to the use of incorrect or incomplete datasets.”

Griva gave the example of individuals who attempted to upload their passport photos to a website, only to be notified that the photos did not meet the criteria, which she said suggested “unfair treatment of specific ethnic groups”.

She also mentioned the 2016 incident in which a Tesla operating in self-driving mode failed to distinguish between a white tractor-trailer and a bright sky, causing a fatal crash. Griva said this shows “how selection bias can be a big problem in real-world scenarios”.
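How selection bias produces this kind of failure can be shown in a few lines. The sketch below is entirely hypothetical: the groups, decision rules and sample sizes are illustrative assumptions, not drawn from the article or Griva’s examples. A model fitted to data dominated by one group can score near chance for an underrepresented group whose patterns differ.

```python
# Hypothetical sketch of selection bias: a classifier trained on data
# dominated by group A performs near chance on underrepresented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample_group(n, rule):
    """Draw n two-feature points and label them with the group's rule."""
    X = rng.normal(size=(n, 2))
    return X, rule(X).astype(int)

rule_a = lambda X: X[:, 0] > 0  # group A's outcome depends on feature 0
rule_b = lambda X: X[:, 1] > 0  # group B's outcome depends on feature 1

# Selection bias: 95pc of the training data comes from group A.
Xa, ya = sample_group(1900, rule_a)
Xb, yb = sample_group(100, rule_b)
model = LogisticRegression().fit(
    np.vstack([Xa, Xb]), np.concatenate([ya, yb])
)

# A balanced held-out evaluation exposes the disparity between groups.
for name, rule in [("group A", rule_a), ("group B", rule_b)]:
    X_test, y_test = sample_group(1000, rule)
    print(f"accuracy on {name}: {model.score(X_test, y_test):.2f}")
# Expect high accuracy for group A and close-to-chance accuracy for group B.
```

Auditing the composition of training data and reporting performance per group, rather than only in aggregate, are common ways to surface this kind of failure before deployment.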

There have been many recent examples of malfunctioning technology in self-driving cars. Just last week, the CEO of Cruise, the General Motors-owned autonomous car company based in California, resigned after several dangerous incidents involving its self-driving cars.

AI is a ‘game-changer’

Though recognising that there are concerns with how people use and abuse AI, Griva was keen to emphasise the value of the technology as a “game-changer”.

“I think that it’s important to emphasise its numerous positives and we should primarily concentrate on the benefits while remaining mindful of potential worst-case scenarios and how to proactively prevent them.”

In this, Griva espouses a view similar to that of Prof Barry O’Sullivan, a leading AI researcher at University College Cork (UCC). O’Sullivan thinks there has been too much emphasis on the so-called existential threat of AI, which he said is “at best, irresponsible” and distracts from genuine issues such as the bias, transparency and accountability of these systems.

In an interview with SiliconRepublic.com, O’Sullivan argued the real risks we should be concerned about are “climate change, global poverty and the protection of human rights”.

“In this respect, statements about the existential threats to humanity posed by AI are tone deaf,” he said.

Griva listed some of what she sees as the many positive aspects of this technology, which include the ability to complete certain tasks much faster and with greater accuracy, leading to increased productivity and cost savings across industries.

“GAI has the potential to assist people with disabilities, for instance by providing tools such as text-to-speech technology or by offering support to people with dyslexia,” she said.

“It can be utilised by companies to easily personalise their services and offer better customer experiences in several domains, ranging from retail to finance and healthcare.”
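As a concrete, if simplified, illustration of the assistive tools Griva mentions, the snippet below reads a sentence aloud using the open-source pyttsx3 text-to-speech library. The library choice is ours for illustration; the article names no specific tool.

```python
# Minimal text-to-speech sketch using the open-source pyttsx3 library
# (an illustrative choice; the article does not name a specific tool).
import pyttsx3

engine = pyttsx3.init()          # use the platform's built-in speech engine
engine.setProperty("rate", 150)  # slow the speech slightly for clarity
engine.say("Generative AI can make digital content more accessible.")
engine.runAndWait()              # block until the sentence has been spoken
```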

In short, Griva sees AI as “a substantial technological leap which can revolutionise many aspects of our lives”. She, for one, is ready to jump right in.


Rebecca Graham is production editor at Silicon Republic
