AI adviser speaks the language of the machine

3 Jul 2024

Image: © Good Studio/Stock.adobe.com

‘AI should remain subservient to human needs,’ says AI expert Dr Maria Aretoulaki.

The possibilities and challenges, the ethical and moral obligations, and the legal and security risks of artificial intelligence (AI) are topics that Dr Maria Aretoulaki thinks about and worries about “on a daily basis”.

Having worked in AI for more than 30 years, Aretoulaki now finds herself in the role of AI policy adviser. “I wouldn’t have liked to be just the recipient of other people’s AI systems,” she tells SiliconRepublic.com. “I am excited but also terrified to be part of the conversation and the developments themselves.”

Image: Maria Aretoulaki

Aretoulaki is the conversational and generative AI design lead at US-based GlobalLogic, a digital engineering consultancy firm that is part of the Hitachi Group. She provides expertise and advice for designing and implementing conversational AI assistants and chatbots and is a trusted adviser to the global leadership team on new technologies. She is also a member of the Hitachi AI Policy Committee and gives regular talks about “AI’s present and future and what it means for the present and future of us humans”, she says.

The present and future of AI

Having witnessed and been “fascinated by” the early days of PCs (“the ones that still had a black screen on which you would write commands in green letters – no pleasant or intuitive user interface!”) and natural language processing, Aretoulaki is able to see the long view.

“I have always been able to see and predict issues arising from the use of AI systems that don’t work as intended, especially when they are human-like or – even worse – when they pretend to be human in the first place.”

And she is clearly concerned about the world that we are creating for future generations.

“The existential threat from AI is real, and anyone who spurns such concerns is either naive or overoptimistic. We have reached a point in AI development that has never been reached before, and we have no idea where it could lead tomorrow, let alone in 40 years,” she says.

“Concerns about individual freedom, privacy, respect and safety cannot be brushed over. Concerns about national security and AI-emboldened bad actors could mean nuclear annihilation or, at least, tainted election campaigns and malfunctioning democracies.”

Aretoulaki says one of the biggest risks she has ever taken was having a child at 46. “Perhaps the most physical risk I have ever taken,” she admits. “It was a lovely, uneventful, full-term pregnancy, and I was lucky to have had full support from his dad from the start. I was also super lucky that my son turned out to be not only healthy but perfect. (But then I would say that, wouldn’t I?).

“Still, I never really thought about the consequences of being an old mum, which has bitten me since.

“I never seriously considered the scary thought of me potentially leaving this world while he is just starting up living life independently with no one to turn to for advice and encouragement.

“He’s definitely worth it, though!”

Aretoulaki’s work to create a better future for her son involves advocating for the explainable and responsible use of AI. Explainability means that AI systems are transparent, trustworthy and predictable, she says.

“Responsibility means that AI should remain subservient to human needs, goals, priorities and values and, hence, should not inadvertently propagate biases and discrimination, let alone inaccuracies and lies.”

The only way to achieve this, Aretoulaki believes, is by always keeping humans in the loop – “someone to check the legitimacy of the AI use case in the first place, but also how the AI is trained and with what type of data; how complete, accurate or representative this dataset is; how you model processes and procedures and the world in that dataset; and how accurate, biased, offensive or unethical the AI output is.”

Speaking the same language

A major issue with AI, particularly the new wave of generative AI, is the question Aretoulaki alluded to earlier: authenticity. AI systems are now so sophisticated that it is hard to tell fake content from real. Her background equips her well for this challenge: she earned a degree in linguistics and literature before undertaking a conversion master’s in machine translation and completing a PhD in computational linguistics.

“I have always worked on marrying linguistics with computer science and it’s still very relevant and necessary now that we have AI systems that make up facts, lie, pretend or try to deceive – just like humans do – through language,” she says.

“Linguistic expertise is not just valuable, it’s quintessential in the era of AI systems that work around the manipulation of language, meaning, user intent, goals and tactics.”

Aretoulaki welcomes the regulation of AI systems and sees the EU as “the world leader in AI policy”. The GDPR and other data and privacy regulations were its first steps, she says, and now it has passed the EU AI Act, which aims to protect the rights of citizens against unsafe, misleading uses of AI.

“The EU takes a risk-based approach, classifying risk to life, property, human rights and national security. [It] will certainly influence other countries and geographies to come up with similar regulations and laws.”

As for companies, they “appear to be doing their best to deliver responsible AI”, she observes, though with the caveat that they don’t have much choice given the global scrutiny of the technology. “They need to appear to act or at least care about the challenges and risks,” she says.

“Nevertheless, at the end of the day, [companies] are still primarily accountable to their shareholders rather than society, so I don’t expect much to come out of so-called ‘voluntary’ compliance with AI regulations and laws. Governments worldwide need to pass specific laws that every company deploying or delivering AI systems must comply with.”

Acceptable risk

Though she advises caution, Aretoulaki is also clearly passionate about AI and innovations in the sector. She is particularly excited about the possibilities for disease diagnosis, and drug discovery and development. “AI has generated a plethora of new opportunities … the impact on healthcare and wellbeing is evident on a global scale.”

The secret to Aretoulaki’s success may be that she has never been afraid to take risks. Whether becoming a mum later in life, switching academic disciplines to pursue her interests, or running her own company for 14 years, she approaches life with a spirit of adventure.

Her experiences have taught her that it’s important to “keep learning”.

“Never fear cross-skilling and upskilling, reading and learning new things, disciplines, areas of expertise, skills and perspectives. It can’t harm but will broaden your horizons and open up new possibilities,” she says.

And equally important is to “dare to ask”.

“Ask for help, ask for a phone number, an email address, a meeting, ask for advice, ask for a job, ask for a promotion, ask for a pay rise. What’s the worst that can happen? They may say no, but they will know that you want more, and you will probably reach out to more people to get it. Plus, if you don’t ask for yourself, no one else will do it for you!”


Rebecca Graham is production editor at Silicon Republic

editorial@siliconrepublic.com