On February 14, Kevin Roose, the New York Times technology columnist, had a two-hour conversation with Bing, Microsoft’s ChatGPT-enhanced search engine. He emerged from the experience a seemingly changed man, because the chatbot had told him, among other things, that it would like to be human, that it harboured destructive desires and that it was in love with him.
The transcript of the conversation, together with Roose’s appearance on the newspaper’s The Daily podcast, immediately fuelled the moral panic already raging about the impact of large language models (LLMs) such as GPT-3.5 (which appears to underpin Bing) and other “generative AI” tools now loose in the world. These are variously viewed as chronically untrustworthy artefacts, examples of technology that is out of control, or progenitors of what is known as artificial general intelligence (AGI) – that is, human-level intelligence – and therefore an existential threat to humanity.
Accompanying this hysteria is a new gold rush, as venture capitalists and other investors scramble to get in on the action. It seems that all that money is burning holes in very deep pockets. Fortunately, this has its comic side. It suggests, for example, that chatbots and LLMs have replaced crypto and web 3.0 as the next big thing, confirming once again that the tech industry collectively has the attention span of a newt.
Strangest of all, however, is that the chaos was ignited by what one of the field’s leading researchers has called “stochastic parrots” – by which she means that LLM-powered chatbots are machines that continually predict which word is statistically most likely to follow the words that came before. And this is not black magic but a well-understood computational process, lucidly described by Prof Murray Shanahan and elegantly dissected by the computer scientist Stephen Wolfram.
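To see quite how unmagical that process is, here is a minimal sketch in Python of the prediction-and-sampling step just described. The words and scores below are invented purely for illustration; in a real LLM the scores come from a neural network conditioned on the entire preceding context, over a vocabulary of tens of thousands of tokens.

    import math
    import random

    # A toy "stochastic parrot": score each candidate next word, turn the
    # scores into probabilities (a softmax) and sample one at random in
    # proportion to them. The scores here are made up; a real LLM computes
    # them with a neural network over the whole text so far.
    # Hypothetical scores for the word following "the cat sat on the":
    logits = {"mat": 4.0, "sofa": 2.5, "roof": 1.0, "banana": -3.0}

    def sample_next_word(logits, temperature=1.0):
        scaled = {w: s / temperature for w, s in logits.items()}
        total = sum(math.exp(s) for s in scaled.values())
        probs = {w: math.exp(s) / total for w, s in scaled.items()}
        words, weights = zip(*probs.items())
        return random.choices(words, weights=weights, k=1)[0]

    print(sample_next_word(logits))  # usually "mat", occasionally "sofa"

Run that in a loop, appending each sampled word to the prompt, and you have the essence of a chatbot: statistics, not sorcery.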
How are we to make sense of all this madness? A good start is to wean people off their incurable desire to interpret machines in anthropocentric ways. Ever since Joseph Weizenbaum’s Eliza, people interacting with chatbots have seemed to want to humanise the computer. This was absurd with Eliza – which was simply running a script written by its creator – so it is perhaps understandable that people now interacting with ChatGPT – which can apparently respond intelligently to human input – should fall into the same trap. But it’s still stupid.
Nor does the persistent rebranding of LLMs as “AI” help. These machines are certainly artificial, but to regard them as “intelligent” seems to me to require a rather impoverished conception of intelligence. Some observers, though, such as the philosopher Benjamin Bratton and the computer scientist Blaise Agüera y Arcas, are less dismissive. “It is possible,” they concede, “that these kinds of AI are ‘intelligent’ – and even ‘conscious’ in some ways – depending on how those terms are defined,” but “none of these terms can be very useful if they are defined in strongly anthropocentric ways”. They argue that we should distinguish sentience from intelligence and consciousness, and that “the real lesson for the philosophy of AI is that reality has outpaced the language available to analyse what is already at hand. A more precise vocabulary is essential.”
It is. For the time being, though, we are stuck with the hysteria. A year is an awfully long time in this industry. Remember that just two years ago the next big things were supposed to be crypto/web 3.0 and quantum computing. The former has collapsed under the weight of its own absurdity, while the latter, like nuclear fusion, remains just over the horizon.
As for chatbots and LLMs, the most likely outcome is that they will eventually be regarded as significant augmentations of human capabilities (“spreadsheets on steroids”, as one cynical colleague put it). If that happens, the main beneficiaries (as in all previous gold rushes) will be the suppliers of the picks and shovels, which in this case means the cloud computing resources the technology requires – resources owned by big corporations.
Given that, isn’t it interesting that the one thing nobody is currently talking about is the environmental impact of the colossal amount of computation required to train and run LLMs? A world dependent on them might be good for business, but it would certainly be bad for the planet. Perhaps Sam Altman, the CEO of OpenAI, the company that created ChatGPT, had exactly that in mind when he remarked that “AI will most likely lead to the end of the world, but in the meantime there will be great companies”.
What I read
Pain profiles
Social Media Is a Major Cause of the Mental Illness Epidemic in Teen Girls is a stunning essay by the social psychologist Jonathan Haidt.
Crowd favourite
What the Poet, Playboy and Prophet of Bubbles Can Still Teach Us is a lovely essay by Tim Harford on, among other things, the madness of crowds.
Tech royalties
What Mary, Queen of Scots can teach today’s computer security geeks is a fascinating piece by Rupert Goodwins in the Register.