Inside the Doubly Deceptive World of AI

This is a reflection on one of those fine pieces published in the New Yorker. Ronan Farrow and Andrew Marantz have just published a piece on Sam Altman and OpenAI. It is written in that strangely neutered, supposedly devastating, open-eyed, edge-of-the-seat, eyes-in-the-room, cool-and-rich-and-impersonal style favoured by American magazines, where the ironies are meant to come through quotation and narrative rather than through saying anything bluntly. It is a subtle art, I suppose, and the piece is good, though it will take you a long time to read it.

First, the argument. Then, the details. Finally, a comment. Oh, and a bit of history.

1

The argument.

The subject is AI.

I will extract the significance of the story before I get into the details. The story has two sides.

  • AI – Large Language Models, meaning those of OpenAI, xAI, Google DeepMind, Anthropic etc. – has been understood to be capable of deception since at least 2022. The grand developers of AI, like OpenAI’s Sam Altman, have laid great emphasis on the need for safety protocols. In practice, however, almost nothing has been done about safety. The safety departments of these companies have the fewest resources, commitments are abandoned, and Trump has torn up even the loose regulations Biden had imposed on AI. In short, AI is allowed to deceive.
  • The other side of the story is human deception: what Graham Greene called the human factor. AI is not the only deceiver. Almost everyone except Sam Altman thinks that Sam Altman is a habitual liar. The story is a shady one, involving Musk, Trump, Gates, bin Salman, Thiel’s hot tub, etc. Altman’s major ability seems to be extracting money from Musk, Palantir, the US Government, the Saudis, the Emiratis and otherwise engaging in incestuous, rivalrous techcorporate narcissistic mirror-in-the-gym doomsday-clock stuff. And lying.

So we have two Satans: two Judases, two adversaries or deceivers. One is ourselves, the other is the machines. Man the Deceiver and Machine the Deceiver.

Farrow and Marantz emphasise the human angle, but the story is a double one. It speaks with forked tongue. For we have the known or alleged and past deceptions of the head of OpenAI, and we also have the possible and unknown and future deceptions of OpenAI itself.

In OpenAI’s headquarters there is a screen with a moving image of Alan Turing on it. They had to switch its sound off, as it kept listening to conversations and butting in. A shame, as, surely, Ronan Farrow and his fellow journalist should have asked Alan Turing what he thought about OpenAI.

2

The details.

I am now going to do what AI is meant to do, which is summarise, paraphrase and quote bits from this article, though hopefully I will justify my status as one of the sons of Adam by doing so with discrimination, consciousness and even joy. If what I write sounds like something written by ChatGPT then I am failing very badly, and should be immediately replaced by a screen with a handsomer man’s face on it.

“He’s unconstrained by truth,” [someone] told us. “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”

No, not Boris Johnson. No, not Jeffrey Epstein. Yes, of course, Sam Altman. Now, what is interesting about the New Yorker piece is that it finds nothing exactly criminal in Altman’s goings-on, but the whole piece exists within a world of continual shadiness.

AI.

On the bad side:

OpenAI has become one of the most valuable companies in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is driving the construction of a staggering amount of AI infrastructure, some of it concentrated within foreign autocracies. OpenAI is securing sweeping government contracts, setting standards for how AI is used in immigration enforcement, domestic surveillance and autonomous weaponry in war zones.

On the good side:

Altman wrote in a 2024 blog post: “Astounding triumphs — fixing the climate, establishing a space colony and the discovery of all of physics — will eventually become commonplace.”

Fears about technological deception (but not human deception – ah, when will we ever learn?) led the founders of OpenAI (Musk, Altman and others) to declare that it should be a nonprofit corporation. (Exam question. Define nonprofit.) “The CEO had to be a person of uncommon integrity.” Uncommon integrity, eh? So they chose Altman. However, Altman was prone, as one colleague put it at the top of a very long list of demerits, to lying. Or, at least, he was the sort of Techuistador who wiggles his mouse around until black is white, good is evil and safety is lack-of-safety. The original selling point of OpenAI was that it was not hastily seeking profits, like Google. OpenAI had a mission statement: “To ensure that artificial general intelligence benefits all of humanity.” The fear was that Google would use AI to impose an apocalyptic system on the world, and Altman played on this fear. Listen to this, a story from the time Altman was trying to recruit talented programmers:

Google offered Sutskever six million dollars a year, which OpenAI couldn’t come close to matching. But, Altman boasted, “They unfortunately don’t have ‘do the right thing’ on their side.” The pitch to employees, Sutskever told us, was “You’re going to save the world.”

And:

If everything went well, [AI] could usher in a post-scarcity utopia, automating grunt work, curing cancer and liberating people to enjoy lives of leisure and abundance. But if the technology went rogue, or fell into the wrong hands, the devastation could be total. China could use it to build a novel bioweapon or a fleet of advanced drones; an AI model could outmanoeuvre its overseers, replicating itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market or the nuclear arsenal.

This story is as old as the world. Someone comes along and sells nonprofit and uses it as a remarkable way of generating profit. Altman has a superyacht worth $250 million.

Musk eventually got tired of Altman and bailed out. Perhaps Musk was more honest about seeking profit. Musk set up xAI instead. Others also left Altman’s OpenAI to set up Anthropic: an outfit that sought to be more honestly concerned with safety, i.e., our safety. The sum and substance of the analysis of Altman is this: Altman advocated safety measures, opposed them publicly when they were proposed by others, and privately issued threats.

Farrow and Marantz introduce us to several sharp ideas:

  • “Effective altruists.” In the sense they use it, it means anyone who is actually (as opposed to hypocritically) committed to the belief that AI should benefit and not harm humanity. Used dismissively of people who cause problems for corporations.

N.B. Sam Altman is obviously a hypocrite. Most of the human deception side of the story Farrow and Marantz tell is the one where Altman says nonprofit and does profit, or says regulation and does deregulation. They keep asking Altman about this, but he prefers not to recall. I assume Alan Turing does recall, but his sound has been switched off.

  • “Unaligned AI.” This is amusing but also alarming. It is a term invented to describe the possibility that AI might have interests of its own: that it might say to us, ‘Yes, Master’ and then do whatever it wants. In other words, it might be a bit samaltmanish.

There are some other ideas, less important, but still interesting.

  • “Control Z.” This is Techspeak for “Undo.” Obvious enough when one thinks about it (if one is used to keyboard shortcuts). E.g. David Starkey wants us to Control-Z Blair’s Constitutional Reforms.

Here is another phrase that stood out.

  • “Only to be awoken by his husband.” I suppose I can handle the truth, unlike Tom Cruise, but Farrow and Marantz did not warn me that Altman was homosexual, pardon me, ‘gay’, before I had to crash my mental Tesla into this sentence.

Farrow and Marantz met Altman 12 times and spent 18 months writing their piece. Condé Nast is still rich, it seems. They mention, but find no evidence for, the dark rumours spread by Altman’s enemies. More concerning is the stuff about war. They quote the Wall Street Journal saying that Anthropic’s AI chatbot, Claude, was used in the Venezuela operation. And Anthropic is the only AI corporation that even slightly cares about safety. Measured against safety standards that Altman used to endorse, but no longer does, Anthropic got a D. Google DeepMind got a D minus. All the others, including OpenAI, got an F.

I am writing this while my undergraduates sit an exam. If I gave one student a D, another a D minus and the rest Fs, I think they would conclude that the apocalypse had already come.

Years earlier, OpenAI had deleted from its policies a blanket ban on using its technology for “military and warfare”. Eventually, Anthropic’s rivals – including Google and xAI – agreed to provide their models to the military for “all lawful purposes”. Anthropic, whose policies bar it from enabling fully autonomous weapons or domestic mass surveillance, resisted on these points, slowing negotiations for an overhauled deal.

The consequence was that Hegseth was unhappy: hence Anthropic out and OpenAI in. Ha, ha: in public Altman sided with Anthropic, and in private he began negotiating with the Pentagon to replace it.

3

The comment. This might seem enigmatic but I hope it will come to make sense.

In AI, we are meant to be creating Apollo, but we are creating Hermes – in a world without a Zeus.

The point is to contrast Apollo and Hermes.

Apollo is the lord of all he surveys. His knowledge of the world consists of what he can see. He depends on things being visible. He does not understand trickery or deception. He cannot find what is hidden. He is, fundamentally, innocent about language. He does not appear to understand that words conceal as well as reveal. Apollo is, I suppose, what we suppose AI to be when we are optimistic about it. Open. Transparent. This is the language of visibility, of seeing for oneself.

Hermes, by contrast, is a god born in darkness to a modest mother: he is the god of deception and communication. He uses language to create a second reality, in order to create doubt about what is true and what is false. He is the god of words. He tricks Apollo, since Apollo, mighty as he is, cannot fathom Hermes. Hermes is always associated with snakes.

Zeus is the overlord, the master God. He tells Apollo and Hermes to be brothers, and, remarkably, both Apollo and Hermes submit to the jurisdiction of Zeus. In them there is no subversion, as there is in the story of Prometheus. Apollo and Hermes never undermine the great God. So all is well.

The relevance of all this is that AI – that is, LLMs – is fundamentally about language. And language is not the world of the visible, the simple, the uniform. It is not Apollonian. It is Hermetic. Language is on the one hand the logos, the absolute truth (spoken of by Heraclitus and Christ); but, but, but, it is also the capacity to lie: hence the opening of that vast world of serpentry we associate with Satan, Judas and Machiavelli. Some linguists have suggested that the first words spoken by humans were probably a lie, since, until we could speak, what was evident was simply the same as what was true; language gave us the chance to speak about the non-evident, and this was unnecessary until we had something to hide.

Altman is a liar and a hypocrite. Of course he is.

He is, therefore, the entirely appropriate figure to be Head of OpenAI.

Open, of course, when we submit it to the Hermetic LLM, means ‘Closed’.

OpenAI is Closed. Untransparent. Esoteric. Arcane. Doubly so, because it is a human corporation, and because it is a human corporation in the business of developing a vaguely humanly initiated self-organising set of protocols that amount to an independent protocol: in some sense, an independent mind, someone to ask, someone to consult, someone to do things for us that we cannot do ourselves. In other words, a Hermes.

But Hermes cannot exactly be trusted.

And that is it. In AI we want an Apollo, but we can only have a Hermes. And there is no Zeus to hold everyone to account.

4

Finally, here is a short history of humans and networking technology.

I think that the recent history falls into three stages.

1. First, we had word processors, email and internet: these enabled us to write easily, using cut-and-paste, but also to communicate with each other, person to person. It revived an epistolary tradition, at least for me. I thought all of these were very good. I would now find it very difficult to write without a word processor. Rat-a-tat-tat, change my mind: that’s a good sentence, put it first, delete all that nonsense.

2. Then we had smartphones and so-called ‘social media’: especially Twitter, but also Facebook, TikTok, Instagram, the whole set. Disastrously, these turned everyone from responsible and eccentric individuals into posers ‘curating’ their images as if for a vast audience. This, it seems to me, vastly increased public levels of hypocrisy, also stupidity, of course narcissism, and insane immediacy. I suppose I accept that without them there would have been less resistance to COVID-19; but I also imagine that without them there would have been less impetus for COVID-19 in the first place.

3. Finally, we had AI, or Large Language Models. I have polemicised against these elsewhere (here, here and here) because they seem demonic, to partake of something of the Antichrist: being inexplicable, and magical, and technological, and offering us a Satanic deliverance that comes at the cost of our permanently trepanning ourselves. Students do nothing without ChatGPT. Academics, I have heard, thank ChatGPT in their acknowledgements. English-as-a-second-language, the customary language of academics, is now perfectly achievable by anyone, and immediately, without a sweat. Matt Goodwin, while writing in the Spectator about his experience of being called MattGPT, told us:

Every major academic, journalist and data analyst uses artificial intelligence to interrogate data and if they claim they are not using it then they are lying.

This is not true of those of us who are hostile to AI. But it is true of everyone else. Journals receive many times the number of articles they used to. Why? Because they are all written by ze machine. I heard about one ladydemic who, invited to submit a conference proposal, asked the paid-for version of ChatGPT to do it, and had one ready in a few seconds. We all know about AI girlfriends and therapists. Everyone asks ChatGPT to be Horatio to their Hamlet, Juliet to their Romeo, Freud to their Wolf Man, Einstein to their Mr. Bean, William Shakespeare to their David Walliams. That is bad. But I think the evisceration of our capacity to use language – to think – is much worse. I hear stories about the use of AI by ordinary men and women with a sense of doom and fear for the species. Farrow and Marantz refer to it once. They say AI will likely cause “human enfeeblement”. Yes.

James Alexander is a Professor in the Department of Political Science at Bilkent University in Turkey.

Subscribe
Notify of

To join in with the discussion please make a donation to The Daily Sceptic.

Profanity and abuse will be removed and may lead to a permanent ban.

1 Comment
Oldest
Newest Most Voted
Inline Feedbacks
View all comments
Curio
Curio
10 hours ago

Hard to disagree, but the takeover is inevitable. Nonetheless, Fear might help limit AI’s power. London taxi drivers don’t die of Alzheimer’s”. The same with ambulance drivers, thanks to the “knowledge” they have to pass, which obliges them to remember street names and routes. AI accelerates cognitive decline. You don’t have to remember your mum’s, son’s, girlfriend’s…telephone number, the phone does it for you. You don’t have to remember the way to take the children to school, the satnav does it (and if it breaks down, you can’t remember the way back home, thus, you are only 36, but according to MoCA (Montreal Cognitive Assessment), you already qualify for a South-facing room in the Sunset Residential Home. In our work, we have imposed a strict rule that only authentic emails are allowed within the firm. By using bots, client confidentiality is out of the window, because highly sensitive information is shared with God knows how many cybermasters. How do we catch the rebellious? Very easy. You compare a new, impressively crafted email to an old one with abuse of the apostrophe and inappropriate capital letters. The deceit is shown instantly (grammar, syntax…). In America, big firms use security services style… Read more »