Sage advice: A Stoical view of generative AI

27 Jun 2023

Roman emperor Marcus Aurelius, generated with AI. Image: © Ian/

Citi’s Ciaran Fennessy takes a philosophical view of generative AI to see if the learnings of the past can be applied to the emerging tech of today.

Generative AI, an area of artificial intelligence that focuses on creating new content using models trained on large datasets, has been available for a number of years. However, it is only recently, since the release of ChatGPT in November 2022, that interest in generative AI has experienced a Cambrian explosion, with an unprecedented volume of coverage across all mainstream media outlets.

The primary forces driving this proliferation of interest have been the availability of large language models – advanced AI models trained on vast amounts of text data to create human-like responses – such as ChatGPT, and their widespread adoption. It has been well documented how ChatGPT set the record for the fastest-growing user base of any technology tool.

How can the maxims from the Stoics, developed more than 2,000 years ago, be applied to generative AI? What can their wisdom teach us about using and applying generative AI?

‘First say to yourself what you would be; and then do what you have to do’

The term ‘artificial intelligence’ was coined by John McCarthy, who organised a research conference at Dartmouth College in 1956, entitled ‘Dartmouth Summer Research Project on Artificial Intelligence’.

The conference was based on “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”.

Within 10 years of the Dartmouth conference, the first chatbot was created by MIT professor Joseph Weizenbaum between 1964 and 1966. Called ELIZA, the program was designed to mimic human conversation. It simulated a psychotherapist: users typed statements into a computer and ELIZA matched them against a list of possible scripted responses.
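The matching described above can be sketched in a few lines of Python. This is an illustrative toy, not Weizenbaum’s original rule set: the patterns and replies below are invented for the example.

```python
import re

# Hypothetical ELIZA-style rules: each pattern is matched against the
# user's input, and the captured text is echoed back in a scripted reply.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no pattern matches

def respond(text: str) -> str:
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            # Fill the scripted reply with the user's own words
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel anxious"))  # -> Why do you feel anxious?
print(respond("What is AI?"))     # -> Please go on.
```

The trick – and the reason ELIZA seemed convincing – is that no understanding is involved: the program simply reflects the user’s words back inside a canned template.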

Through the AI winters of the 1970s and 80s, when the initial optimism about what AI could achieve went unrealised and US research investment was redirected to the development of ARPANET – which ultimately led to the internet – pockets of AI research continued.

At Carnegie Mellon University, the NavLab driverless car project commenced in 1984 with the objective of using computer vision to achieve autonomous driving.

Fast forwarding to today, when generative AI is being discussed widely within organisations, and viewing it through the lens of Epictetus – ‘First say to yourself what you would be; and then do what you have to do’ – can generative AI support the conjecture that every aspect of learning, or any other feature of intelligence, can be so precisely described that a machine can be made to simulate it?

‘Most of what we say and do is not essential. If you eliminate it, you’ll have more time, and more tranquillity’

This Stoic maxim reminds us to review the tasks we do and how essential they are, thereby creating more time for ‘tranquillity’. Applying this to generative AI, if we bring a questioning and open mind to how the technology can be applied, can it create more time for other tasks – and for tranquillity?

While many may view large language models as tools available over the internet, organisations can apply large language models within their own environments and train these models on their own internal datasets.

Currently, there are a number of models available to support this and the number keeps growing. Hugging Face has created a large language model leaderboard, where models are assessed against four different criteria on how they perform.

The criteria cover zero-shot learning – a question put to the model with no guidance provided on how the model should answer it – and 5/10/25-shot learning, whereby five, 10 or 25 worked examples are provided to the model to guide its answer.
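The difference between zero-shot and few-shot prompting comes down to what text is placed in front of the question. A rough sketch, using an invented sentiment-classification task (the examples and labels are hypothetical, not drawn from any real benchmark):

```python
# Zero-shot: the question is posed on its own, with no worked examples.
zero_shot = "Classify the sentiment of: 'The service was slow.'\nSentiment:"

# Few-shot (here 3-shot): worked examples precede the same question,
# showing the model the expected format and style of answer.
examples = [
    ("'The food was wonderful.'", "positive"),
    ("'I waited an hour for a table.'", "negative"),
    ("'The menu was fine, nothing special.'", "neutral"),
]
few_shot = "".join(
    f"Classify the sentiment of: {text}\nSentiment: {label}\n\n"
    for text, label in examples
) + zero_shot

print(few_shot)
```

In both cases the model receives nothing but the prompt string; the few-shot variant simply packs the ‘guidance’ into the prompt itself, which is why leaderboards report accuracy separately for each setting.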

Considering the advice of Marcus Aurelius through the lens of the 21st century, while the work we do is essential (otherwise why would you have that role?) can the introduction of this technology support teams in creating content?

While the ‘tranquillity’ that he refers to comes from a bygone age, can large language models be leveraged to support the people doing the work?

‘Be not swept off your feet by the vividness of the impression, but say: ‘Wait for me little impression: allow me to see who you are, and what you are an impression of; allow me to put you to the test’

How many of us have been swept off our feet by the vividness of the impression since the launch of ChatGPT? How many of us have reflected on how large language models can support how we currently work? And have we taken the opportunity to put this technology to the test?

As noted earlier, the large language model leaderboard shows the current best-performing models. What is interesting when you look at this data in detail is that the highest-ranking model (at the time of writing) has an average accuracy of just 63pc across the four criteria models are measured against.

Some of this inaccuracy can be attributed to one of the challenges with generative AI: ‘hallucinations’. Hallucinations occur when the outputs from the models seem credible but are in fact incorrect. Therefore, as Epictetus noted, ‘allow me to put you to the test’ becomes a critical mitigation in the use of this technology, which can be supported by keeping a human in the loop.

Many people think of large language models as tools that support the creation of new text and image content. However, there are also large language models that can support the writing of software code. These models are trained on the large amounts of publicly available code on the internet.

A research paper from Stanford University published in December 2022 analysed the quality of the code produced by large language models. The findings from this report stated that “participants with access to an AI assistant often produced more security vulnerabilities than those without access … participants [with] access to an AI assistant were more likely to believe that they wrote secure code than those without access to the AI assistant … those who trusted the AI less … were more likely to provide secure code”.

Reflecting on Epictetus’s statement, we all need to take time to appreciate how generative AI can be applied, but also to be very cognisant of the impacts it might have – to take the time to ‘allow me to see who you are’ and to ‘put you to the test’.

Although more than 2,000 years have passed since the time of the Stoics, their maxims still apply to the use and application of generative AI. The philosophy of the Stoics can be a guiding compass for how organisations implement this technology and how consumers use it.

As Marcus Aurelius said: ‘Keep constantly in mind in how many things you yourself have witnessed changes already. The universe is change; life is understanding’.

By Ciaran Fennessy

Ciaran leads the global funds services strategy and transformation team within Citi. He also lectures in AI.
