
Beyond words?

The possibilities, limitations, and risks of large language models

Natural language processing (NLP) 

Human language is an intricate means of conveying complex and abstract thoughts. A computer program capable of ‘understanding’ spoken and written language, including intent and sentiment, is thus a powerful tool that lets us combine the information-carrying efficiency of human language with the data-processing abilities of computers. The development of machine-based language models is a field of artificial intelligence (AI) known as natural language processing (NLP). Blending statistical, machine learning, and deep learning models with computational linguistics, NLP creates language models that can predict how likely a specific word is to appear in a particular context. Because these models are probabilistic, their output can be unpredictable and creative, allowing us to explore new ways of expressing ideas while saving time.
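To make that idea concrete, here is a minimal sketch using a toy bigram model, where the probability of the next word is estimated simply from how often it follows the current word in a small corpus. Real LLMs replace this counting with deep neural networks trained on vastly more text, but the underlying probabilistic principle is the same:

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model is trained on billions of words.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count bigrams: how often each word follows a given context word.
bigram_counts = defaultdict(Counter)
for context, word in zip(corpus, corpus[1:]):
    bigram_counts[context][word] += 1

def next_word_probabilities(context):
    """Estimate P(word | context) from the bigram counts."""
    counts = bigram_counts[context]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_probabilities("the"))
# -> {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
```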


Large language models (LLMs)

Large language models (LLMs) are models that have been trained on massive amounts of text; some of the best-known examples are the so-called Generative Pre-trained Transformers (GPTs). GPT-3 is a general-purpose LLM from OpenAI, trained on hundreds of billions of words from many types of sources, that generates text in response to user prompts. ChatGPT is a conversational AI model that has been specifically designed to generate text in a conversational style. While based on the GPT-3 architecture, ChatGPT is distinctive in that it has been trained using Reinforcement Learning from Human Feedback (RLHF) – a technique that reduces the likelihood of harmful, untruthful, or biased output. GPT-4, released to selected users by OpenAI in March 2023 and rumoured to have trillions of parameters, is a multimodal extension of ChatGPT that can work with both text and images.
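GPT-3 and GPT-4 are accessible only through OpenAI’s services, but the basic prompt-in, text-out pattern they share can be sketched with the openly available GPT-2 model via the Hugging Face transformers library. The model choice and sampling parameters below are illustrative, not a recommendation:

```python
from transformers import pipeline

# Load a small, openly available GPT-style model. GPT-3/GPT-4 work the
# same way conceptually but are far larger and API-only.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Large language models are",
    max_new_tokens=30,   # cap the length of the continuation
    do_sample=True,      # sample from the probability distribution
    temperature=0.8,     # higher values -> more 'creative' output
)
print(result[0]["generated_text"])
```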


Limitations and risks of LLMs and GPTs 

Although LLMs are good at discovering correlations and patterns in natural language, and at using that information to assist humans with various tasks, they are not self-aware, have no appreciation of cause and effect, have no sensory experience of the world, and have only a limited ability to simulate human reasoning. As they become more powerful and widely used, it becomes increasingly important to be aware of their limitations and risks, which range from practical challenges to concerns that LLMs could pose existential threats to our civilization. For example, GPTs lack context beyond the data on which they have been trained; they have no long-term memory for continuous learning; their prompts are limited to a fixed number of tokens (the context window); and, like most deep learning models, they lack explainability and interpretability. Furthermore, LLMs may deliberately or inadvertently be used in ways that produce inaccurate, biased, fraudulent, or offensive content, or that leak sensitive information. On a societal level, they will affect the workplace; they may create unintended dependencies and vulnerabilities; and their resource utilization will have environmental and economic footprints that are hard to predict.
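The prompt limit, at least, is easy to make tangible: a model accepts only a fixed number of tokens per request. The sketch below uses OpenAI’s tiktoken library to count tokens and truncate an over-long prompt; the encoding name and the limit are illustrative and depend on the model actually used:

```python
import tiktoken

# Tokenizer used by several OpenAI models (illustrative choice;
# check which encoding your model actually uses).
enc = tiktoken.get_encoding("cl100k_base")

def truncate_to_limit(prompt: str, max_tokens: int) -> str:
    """Drop tokens from the end so the prompt fits the context window."""
    tokens = enc.encode(prompt)
    if len(tokens) <= max_tokens:
        return prompt
    return enc.decode(tokens[:max_tokens])

prompt = "Summarise the following report for a general audience: ..."
print(len(enc.encode(prompt)), "tokens")
print(truncate_to_limit(prompt, max_tokens=8))
```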


Risk management 

To mitigate such risks, technical advances are necessary, and there have already been developments in this direction. For example, fine-tuning of existing LLMs can improve performance in specific subject domains. Multimodality, where models learn from speech, images, and video in addition to text, also holds the promise of giving AI better ‘world models’ that are more grounded in the physical world. Watermarking and AI classifiers can help fight mimicry and improve accuracy, and coupling language models to other systems, such as search engines and computational systems, can create synergies and higher-quality output. There is also much ongoing work on detecting and flagging AI-generated content, which may help combat plagiarism, spam, and fake news.
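As a toy illustration of such coupling, consider a wrapper program that scans a model’s output for a marker signalling an arithmetic request and replaces it with an exactly computed result. The CALC(...) syntax and the helper below are invented for illustration; real tool-use systems are considerably more elaborate, but the division of labour between language model and external tool is similar:

```python
import operator
import re

# Hypothetical protocol: the model is instructed to emit
# CALC(<a> <op> <b>) whenever it needs exact arithmetic.
OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def run_tools(model_output: str) -> str:
    """Replace CALC(...) markers in the model's output with computed results."""
    def compute(match):
        a, op, b = match.groups()
        return f"{OPS[op](float(a), float(b)):g}"
    pattern = r"CALC\((-?\d+\.?\d*)\s*([+\-*/])\s*(-?\d+\.?\d*)\)"
    return re.sub(pattern, compute, model_output)

# In a real system this string would come from the language model.
print(run_tools("That comes to CALC(7 * 6) units in total."))
# -> "That comes to 42 units in total."
```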

In addition to technical advances, there is a need to educate humans about effective and safe interaction with LLMs. This includes taking advantage of ‘prompt engineering’ – the craft of writing high-quality user input to get more reliable and useful answers – and refraining from sharing personal, proprietary, or business-sensitive information. As we become more used to interacting with AI, we will also become more attuned to AI systems’ quirks and less vulnerable to their potential harms.
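In practice, prompt engineering often amounts to stating a role, a task, relevant context, and the desired output format explicitly rather than leaving the model to guess. The template below is a hypothetical pattern for doing so, not an official API; note that the context slot should only ever contain information one is comfortable sharing:

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from its typical ingredients."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Answer format: {output_format}\n"
    )

prompt = build_prompt(
    role="an experienced science editor",
    task="Summarise the text below for a general audience",
    context="<paste public, non-sensitive text here>",
    output_format="Three bullet points, plain language",
)
print(prompt)
```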

