THE possibilities of Artificial Intelligence (AI) and fears of AI are two of the hottest topics of conversation in Singapore, and most of the rest of the world, right now.
Google data shows more than 65,000 searches per month in Singapore for AI, generative AI, popular platforms such as OpenAI and Midjourney, and related topics.
Clearly, AI offers exciting possibilities in science, research, healthcare and the mitigation of climate change. Perhaps less excitingly, but just as importantly, the positive impacts in areas such as logistics, transportation and supply chain management are also easy to imagine.
Old Fears, New Unknowns
But with these possibilities come fears. Fears of bad actors using AI to breach security systems or interfere in the political process, fears of unethical use, even fears of AI bringing about the end of human civilisation.
In March of this year, the Future of Life Institute published an open letter, signed by Elon Musk, Steve Wozniak and other tech luminaries, calling for a six-month moratorium on the development of AI-based technologies whilst the risks are assessed. Their pleas fell on deaf ears.
Many of the concerns focus on AI technologies becoming, as the Future of Life letter puts it, “human-competitive”.
In short, is an AI going to do you out of a job?
Fears of technology taking human jobs are of course nothing new; 19th-century Luddites rightly feared that new machinery would replace their highly skilled textile jobs. Many have warned about the impact of automation, algorithms and machine learning; books such as Richard and Daniel Susskind’s The Future of the Professions, Erik Brynjolfsson and Andrew McAfee’s The Second Machine Age and of course Ray Kurzweil’s The Singularity Is Near have been very influential over the last 10 to 15 years.
And it’s certainly arguable that the digital revolution has yet to produce the jobs and transformation of work that the industrial and computer revolutions did. Nonetheless, the current wave of generative AI technologies, the best known being ChatGPT, has caused even more existential angst than might have been expected.
Are these fears justified? I’ve been working with ChatGPT4 every day for six months now, and I think that at some level they are, especially for lower-level, more process-oriented roles. But I believe ChatGPT4 can also be a great boon to creativity, productivity and inspiration.
Helpful Or Helpless?
To decide whether large language models (LLMs) are a friend or a fiend, it’s important to understand what they actually are and what they are not.
- LLMs are text-based models, designed for human-language tasks. They are trained on vast amounts of text data and essentially attempt to predict the next word in a sequence, based on the interrelationships of all the words in the training set. So they are not supercomputers, nor a repository of all human knowledge, and it’s unreasonable to expect them to be. Don’t ask LLMs to do complex mathematical tasks, for example: LLMs have no calculation capacity, and will provide answers based on numerical relationships they have observed in the training set. For simple calculations they will probably be right, but you shouldn’t rely on them.
- LLMs are not search engines. There are some overlaps between search engine and LLM capabilities, and indeed search engines such as Bing are experimenting with incorporating LLMs. In general, search engines point the user to web pages that cover the information being searched for, whereas LLMs try to generate an answer to the query itself. Search engines index information constantly, whereas LLMs are trained on a fixed data set. In short, LLMs cannot tell you anything about current events; don’t expect them to tell you where the best chicken rice stall is.
- If it’s not in the public domain, LLMs don’t know it. Of course, individual businesses or government authorities could train, and no doubt are training, their own LLMs on data they have access to. But for public AIs, if it hasn’t been published, they have no knowledge of it.
- The more data, the more reliable the conclusion. In areas where a great deal of quality data is publicly available, LLMs can be trusted to provide reliable answers; in more niche areas, less so. An interesting illustration is music versus literature: there are many detailed online music databases, maintained by dedicated music fans. Ask ChatGPT4 to name the fourth Rolling Stones album and it will correctly nominate Aftermath and talk knowledgeably about the different UK and US editions. There are many literature databases too, but these tend to exclude “spoilers”, so asking ChatGPT4 for the plots of works of literature can be unreliable. A famous example: until recently, ChatGPT4 was unable to correctly identify the murderer in Murder on the Orient Express, possibly the most famous work of detective fiction in English. That has since been corrected, but many other examples from the golden age of detective fiction remain wrong.
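The “predict the next word” idea above can be sketched with a toy bigram model in Python. This is a deliberate, hypothetical simplification: real LLMs use neural networks over far longer contexts and vastly larger corpora, but the principle of choosing the next word from relationships observed in the training text is the same.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text data a real LLM is trained on.
corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently observed after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # "on" -- every "sat" in the corpus is followed by "on"
```

The model can only echo patterns in its training text: ask it what follows a word it has never seen, and it has nothing to offer. That is why an LLM’s answers are only as reliable as the data behind them.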
How Do I Use ChatGPT?
Here are a few of the main ways I use AI tools in my own work.
- To organise my thoughts and structure documents. In my work as a consultant, I need to write proposals, reports and templates. Rather than spending time clients are paying for on constructing the right frameworks, I can prompt ChatGPT4 to give me a number of options to choose from. This means I can spend my time on more creative work.
- To do basic coding. I need to create formulas in Excel, data visualisations in Looker Studio and other formats; ChatGPT4 is great at correcting my many errors and suggesting more elegant solutions. It’s great for e-commerce coding too.
- To create art, graphics, logos and so on. I use Midjourney to create illustrations and “photo-like” images; once you understand how to phrase the prompt properly, Midjourney can create original, sometimes startling, illustrations in minutes. I use Canva to create original designs, logos and graphics.
- Talking-head videos. I use Synthesia to create AI-based talking-head videos, mainly for training and learning initiatives.
- Simple marketing. Mailouts, credentials documents and the like; ChatGPT4 is excellent at organising your CV too.
All of these increase my productivity and help me get past various blocks in the flow of my work. But it’s also easy to envisage the threat to some occupations.
- Customer service: AIs, particularly if trained on an organisation’s own customer-interaction data, should be able to address the vast majority of customer service issues. Not all, of course; there will always be a need for humans to think outside the box.
- Coding: for the last few years, we’ve been encouraging parents to teach their children to code. Coding will always be a useful skill, of course, but no longer such a valuable one; AIs will be able to do all but the most creative coding tasks.
- Art and design: graphic designers and similar professionals are going to need to work hard to demonstrate better value than an AI; only the very creative will not be stretched.
- Marketing, digital marketing, copywriting and so on: AIs can create lucid, digestible content quickly, efficiently and cheaply. What they can’t do, at this stage, is have intelligent or original thoughts.
- Training and learning: the creation and dissemination of content in the most digestible, learning-enhancing way.
In short, if your work can mainly be described as process-related, then an AI is coming to replace it, if not now, then soon.
But if there are elements of creativity in your work, then AI can be harnessed to increase that creativity, and to level you up.
A large-scale study of BCG consultants, conducted by Harvard Business School’s Technology & Operations Management Unit, split the consultants into two groups: half were allowed to use AI and a control group was not. Those using AI completed 12.2% more tasks and completed them 25.1% faster. But the AI rarely expanded the frontiers of their capability.
So, friend or fiend? Like most significant innovations, it’s a bit of both. The whole of human history has been characterised by the attempt to make process easier and more efficient. And there have always been jobs that have been lost in that process, and new opportunities that have emerged.
Guy Hearn will be a panellist on the next WED WEB CHAT — A.I. — Friend Or Fiend on 4 October 2023, from 12:45-1:30pm (SGT). Other panellists in the session include Michal Krystyanczuk, co-founder of The Data Strategy, and Prof Pierre Alquier of the Department of Information Systems, Decision Science and Statistics at ESSEC Business School Asia Pacific.
Join the free Zoom session via THIS LINK.