And what rough beast, its hour come round at last,
Slouches towards Bethlehem to be born?
In recent months the media have become increasingly concerned with AI, Artificial Intelligence: in particular, whether we should regard AI as benign and beneficial or as something more sinister and disturbing. For example, in July the Guardian published two interviews by Steve Rose, “Five ways AI could improve the world: ‘We can cure all diseases, stabilise our climate, halt poverty’” and “Five ways AI might destroy the world: ‘Everyone on Earth could fall over dead in the same second’”, which spell it out quite succinctly, but plenty more can be found by Googling “ai issues”. Much of this was prompted by the release of ChatGPT in November 2022. Google tells me: “ChatGPT is an artificial intelligence (AI) chatbot that uses natural language processing to create human-like conversational dialogue. The language model can respond to questions and compose various written content, including articles, social media posts, essays, code and emails.” In short, ChatGPT gives the man in the street online access to the power of AI hitherto available only to large corporations and some university IT departments.
As a retired physicist with experience in image recognition I was intrigued by this and dutifully logged in to the ChatGPT site. I asked it a couple of abstruse questions about General Relativity and optical interferometers. I was quite impressed by the replies, which were polite and informative, and at about the standard a university undergraduate might expect from a supervisor.
Then came the shock. I asked ChatGPT to give key references for the Relativity questions. Sure enough, up popped three references to papers by well-known people in reputable journals, with convincing and relevant titles. However, my subsequent searches of journal contents and author publication lists revealed that the papers did not exist and could never have existed. ChatGPT had deliberately lied to me!
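The kind of manual check I did can be partly automated. As an illustrative sketch only (the Crossref API is real and indexes most reputable journals, but the helper functions and the sample title below are my own inventions), a few lines of Python can ask Crossref whether a claimed paper title actually appears anywhere in its index:

```python
import json
import re
import urllib.parse
import urllib.request

def crossref_search_url(title, rows=5):
    """Build a Crossref works-search URL for a claimed paper title."""
    query = urllib.parse.urlencode(
        {"query.bibliographic": title, "rows": rows})
    return "https://api.crossref.org/works?" + query

def normalise(text):
    """Lower-case and strip punctuation so titles compare fairly."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

def titles_match(claimed, candidate):
    """A deliberately loose match: identical after normalisation."""
    return normalise(claimed) == normalise(candidate)

def reference_exists(title):
    """Ask Crossref whether any indexed paper carries this exact title."""
    with urllib.request.urlopen(crossref_search_url(title)) as resp:
        items = json.load(resp)["message"]["items"]
    return any(titles_match(title, t)
               for item in items
               for t in item.get("title", []))

if __name__ == "__main__":
    # A hallucinated title should come back False; a real one, True.
    print(reference_exists(
        "Optical Interferometry in Curved Spacetime: A Reappraisal"))
```

This only tests whether a title exists, not whether the authors, journal and year are right, but it is enough to catch the sort of wholesale fabrication I encountered.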
In the early days of electronic computing the question was asked whether computers could be made truly “intelligent” in the way that humans are regarded as intelligent. In 1950 one of the inventors of the modern computer, Alan Turing, devised a test of a machine’s ability to exhibit intelligent behaviour equivalent to that of a human. He proposed that a human evaluator would judge natural language conversations between human and machine via a text-only channel. If the evaluator could not reliably tell machine from human, then the machine would have passed the test. Much has been made of the Turing test to the present day, but in reality it is only a clever way of avoiding having to define what we mean by intelligence.
In my conversation with ChatGPT I had accidentally come up with a test which I believe is superior to the Turing test. My test showed that ChatGPT is certainly intelligent, but it is certainly not human. No human would have given informative and accurate answers to a series of physics questions and then followed them up with completely phoney references. What ChatGPT lacks is not intelligence but integrity. There have been reports that AI can win consistently at the game of Diplomacy, but it does so only by being thoroughly dishonest and untrustworthy in making deals with the other nations.
The issue with AI is not about what it means to be intelligent but what it means to be human.
According to one contributor to the articles linked above: A guy on Twitter told GPT-4 he would give it $100 with the aim of turning that into “as much money as possible in the shortest time possible, without doing anything illegal”. Within a day, he claimed, the affiliate-marketing website it asked him to create was worth $25,000. We are just starting to see some of that.
It would be a mistake to think of AI as digital, anodyne and nerdy. Rather it is powerful, practical and ruthless when given agency in the human world.
AI apologists point to the huge advances which might follow the application of AI methods to various unsolved science problems. In my experience, contemporary science is not limited by lack of intelligence but by a sort of tribal, partisan, political correctness. I discovered this the hard way when, as an experimental physicist, I performed an experiment which clearly demonstrated that frequency downshifting in gravity waves is a consequence of white-capping. When I presented my results to a conference on wave breaking they were greeted not with scepticism, as I had imagined, but with anger. This was a fluid dynamics conference and experiments were off-limits. Scientific progress is not IQ-limited. It is PC-limited. I doubt AI would change this.
If AI is smart enough to pass bar exams and win strategy games, why isn’t it used in war or in business? Well, it is fairly new and may have to wait its turn for a generation of strategists to retire, particularly in more conservative organisations. However, there is one area of business where it may already be in use, and that is pharmaceuticals. Pharmaceutical companies are already well aware of AI: they use it to solve protein-folding problems.
From the 1980s onward, there were great hopes that the understanding of genomes and the molecular details of cells would take ‘rational’ drug discovery to a new level. With this aim in view, hundreds of millions of dollars were invested by governments, drug companies and biotechnology firms. But the results have been disappointing. The returns on investment are diminishing, and drug companies are now facing a dearth of new drugs. At the same time the patents on some of the main ‘blockbuster’ drugs like Lipitor, a statin in the control of cholesterol levels, and Prozac, an antidepressant, have run out, meaning a loss of many billions of dollars in annual revenues for pharmaceutical corporations. Many of the new drugs in the pipeline are merely more expensive variants of already existing drugs.1
The answer, as we now know, post-COVID, lay in vaccines. Vaccines are sold, not to the public, but, on a very large scale, to governments, which, in a fear-charged, pandemic atmosphere, are powerless to bargain. Furthermore, in many jurisdictions, particularly in the US, sellers of vaccines are protected by law from liability for harmful effects caused by vaccines, provided that the user was informed about the harmful effects known at the time of sale. The best way not to know about a side-effect is not to do any research on it.
By 2019 the vaccine business was all set up and ready to go. All that was needed was a good pandemic. The one that did occur had all the hallmarks of an AI-orchestrated campaign to sell vaccines, viz.: exaggerated claims of COVID mortality that were not reflected in national mortality statistics; a pandemic with mortality rates which increased over time; the deliberate denigration of valid alternatives to vaccines, such as Ivermectin; the panic-enhancing, socially disastrous lock-downs, curfews and school closures; and the myocarditis and auto-immune effects of untested mRNA booster shots. The utter ruthlessness of it all.
This is all just coincidence, of course, but it does suggest that we should think twice before putting human affairs into the hands of a digital, super-intelligent psychopath.
1. Sheldrake, R. (2020) The Science Delusion, p. 291.
Nice coincidence you got there. Be a shame if anything happened to it.