An Empirical Study Shows That AI Will Destroy Human Critical Capacity
Anyone with any critical sense at all must be concerned about the use of AI. More than any other human invention, AI represents an abandonment of critical sense. I think this is obvious. I do not think it needs demonstration. But we are in a demonstrative age, always seeking evidence, data and so on, so it is good to see that the Massachusetts Institute of Technology (MIT) has just produced a long report of some 200 pages to show that AI seems to reduce brain function and could have a catastrophic effect on brain development.
Natalia Kosmyna, a scholar at the MIT Media Lab, designed a study, carried out with seven colleagues, which has led to the publication of an as-yet non-peer-reviewed paper entitled ‘Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task’.
“But it is very good that Kosmyna and her colleagues have found a way of making the point that can carry weight with the technological-bureaucratic-Starmerite-Sunakian classes.”
‘Carry weight’ as in ‘great, this will make the population far more gullible and easy to control’?
Perhaps Large Language Model AI is similar to the ‘wisdom of crowds’, since it is based on ‘crowds’ of data already known?
You can find big discussions about the ‘wisdom of crowds’ on the interwebs (perhaps ChatGPT can summarise them for you), but essentially the ‘wisdom of crowds’ tends to be accurate enough when estimating a numerical value, although the outcome often incorporates all the cognitive biases of the individuals too. The ‘wisdom of crowds’ can easily tip over into mob behaviour.
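The numerical-estimate claim above can be illustrated with a toy simulation: many noisy individual guesses average out, but a shared bias does not. All the numbers here are invented purely for illustration.

```python
import random
import statistics

random.seed(42)
true_value = 100.0  # the quantity the 'crowd' is trying to estimate

def crowd_estimate(n, bias=0.0):
    """Average of n noisy guesses; a shared bias shifts every guess."""
    guesses = [true_value + bias + random.gauss(0, 20) for _ in range(n)]
    return statistics.mean(guesses)

# Independent errors cancel: the crowd average lands near 100.
unbiased = crowd_estimate(1000)
# A systematic bias survives averaging: the crowd lands near 115.
biased = crowd_estimate(1000, bias=15)
print(round(unbiased, 1), round(biased, 1))
```

The second call is the cognitive-bias caveat in miniature: averaging removes individual noise but faithfully preserves whatever bias the whole crowd shares.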
My mind goes back decades to when I (and many others) used cribs to translate Latin set books. The teacher (‘Brutus’) warned us that the translation was unreliable, but it made our lives easier…
AI could have reduced that 200 page report to a page of bullet points.
My son-in-law works in the NHS and tells me they use AI to write reports, which are sent to a recipient who then uses another program to précis the original.
Nobody, it seems, can see where this might be heading…
Nowhere good, that’s for certain.
I wouldn’t be so negative about this: if there’s a procedure where a text generated mechanically by a program is fed into the same program to generate another text mechanically from it, which is then fed back piecemeal to the person who generated the original text, it’s obvious that the whole exercise is completely useless and can simply be avoided.
The positive outcome could thus be to reduce the managerial overhead of the NHS, as it demonstrably doesn’t serve any purpose beyond being a brainless gym exercise undertaken for its own sake.
Was talking about AI down the pub. Since it is interpolation weighted by the most popular opinion, it’s leading to a hollowing-out of jobs, which means only the high-level stuff is left for people, and when they retire, what happens?
Do read about “creative destruction”. We no longer make buggy whips (or buggies), typewriters, VCRs or cassettes; automatic telephone exchanges destroyed thousands of jobs, computers hundreds of thousands, and farm tractors, ploughs, seed drills and combine harvesters destroyed hundreds of thousands of jobs on the land. Pre-industrial, 80% of jobs were on the land; now it is less than 2%. Knowing economic history helps. Our real problem is that an increasing number of people are in jobs (or on welfare) who produce nothing of value: approximately 40% of the work-able population. This means economic activity is being reduced as wealth is transferred from a diminishing number of producers to non-producers, to money lenders and to the Chinese. This came about for a number of reasons: nationalisation of the economy after the war, but also the fact that the idiot population preferred to spend its capital on “free” healthcare and “free” welfare instead of putting that capital into the development and expansion of our manufacturing base. Result: £2.8 trillion of debt, a dysfunctional but “free” NHS, and all the dross and grot from third-world shiteholes living off our welfare State. Well done chaps. But never mind that, let’s worry about the economic effect of AI, which I guarantee…
Of course there is a large body of students for whom AI doesn’t help a great deal.
In engineering and most mathematical sciences there is usually only one right answer, or an answer that lies within a defined range, and it is essential that the student demonstrates that they understand the solution by showing their workings.
In maths there are often different routes to the correct answer. When you have tried two routes, and the answer is the same, you probably have the correct answer. I should imagine AI will be able to provide these different approaches.
However, when it comes to engineering, experience (years of trial and error) plays a huge part and I don’t think AI could easily compete there.
I’d say AI is fantastic for engineering. Not language models, but pattern-recognising, model-testing stuff: one which can quickly compare options derived from millions of existing structures, codes or machines. Clearly, you’d ask it to build an AI machine; one that could build a better AI machine, and so on. Errm, then you’d ask for a von Neumann machine that could build itself and proliferate exponentially across the entire universe. And then, ask it to build a machine that could uplift the human mind into an eternal machine, and then uplift all humanity, and then it could go one of two ways… the big bang, as the single living entity that spanned the entire universe died (entropy, the second law of thermodynamics, etc.), or it could build a simulation for all the uplifted human minds in the machine, which pretended for them that they had short lives on a developing world… or build loads of simulated universes in order to find out if there is any meaning to all of this. That’s the way it’ll go.
At what point in this process do you think the AI decides humans are a waste of resources and removes us?
Exactly, it’s important to know ‘how’ to work it out, not just the answer…
It is, apparently, astonishingly good at medical diagnosis, particularly from X-rays and scans. Evidently it’s to do with the huge amount of data at its disposal, which it can sift incredibly quickly and use to make comparative analyses many, many times faster and better than an individual clinician, who, apart from their necessarily limited personal experience of previous diagnoses, also has human issues to deal with, like being on call for 36 hours.
I once heard on the radio that in the former USSR maths students had oral exams; that was their culture and everyone could do it. Imagine if we had to explain our workings orally!
Who’d have thunk it? You let a machine do the thinking for you, and you get dumber. Just like watching TV all day.
In which case – who did the thinking to conceive, engineer, construct and market the machine?
Short version: a one-off observation of a minuscule group of students led us to discover electrical phenomena in their brains that we don’t understand. Our foregone conclusions about this are: 1. …, 2. …, 3. …
Junk science at its worst. And that James Alexander sees something else in it doesn’t speak well for his critical faculties. Somewhat pointedly, one could claim that they were obviously “dulled by AI¹”: the result aligned well with his fears and prejudices, and hence he accepted it without criticism.
¹ The basic idea behind a so-called large language model is to do a statistical analysis of large bodies of existing texts to determine the relative frequency of words and phrases appearing after other words and phrases, and to use this information to construct new texts with matching properties. That’s a classic case of
Then he holds the parts in his hand,
Lacking, alas, only the spiritual bond¹
[Goethe, Faust I]
¹ Seriously free rendering of the original German: the outcome of taking it apart was a mess of pieces whose purpose remained a mystery to him.
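The statistical idea sketched in the footnote above can be shown with a toy bigram model: count which word follows which in some text, then generate new text by sampling from those counts. (Real large language models use neural networks over subword tokens and far longer contexts; this is only the frequency-table intuition, with a made-up corpus.)

```python
import random
from collections import defaultdict

# A tiny, invented 'body of existing text'.
corpus = (
    "the cat sat on the mat and the dog sat on the rug "
    "and the cat saw the dog"
).split()

# Relative-frequency table: word -> every word observed to follow it.
# Sampling from the list reproduces the observed frequencies.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length, seed=0):
    """Construct new text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # dead end: no observed continuation
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8))
```

Every adjacent word pair in the output occurred somewhere in the corpus, which is the footnote’s point: the pieces are all there in hand, but nothing in the table knows what any sentence means.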
So having an answer provided v having to figure it out yourself means the brain activity is different between the two cases. An electroencephalography machine is needed for that? Really?
Machines can only replicate what Humans do manually, but they can do it much faster and at lower cost.
AI can only operate within the parameters of the knowledge base that exists. It cannot create what isn’t there. It appears most who fear-monger about it think it is some kind of independent entity operating on some parallel plane.
“So having an answer provided v having to figure it out yourself means the brain activity is different between the two cases. An electroencephalography machine is needed for that? Really?”
All that is known for certain from this ‘experiment’ is that the electrical activity which could be measured in the brains of different people, in different situations, at different times, was different.
I’m not blind to the very serious concerns about the use of AI, but I find ChatGPT, Copilot, Grok, etc, amazing and fascinating. AI is here to stay, so we have to learn how best to use it, and not just be negative about it all the time. Stupid people will use AI stupidly, clever people will use AI cleverly, creative people will use AI creatively. We need to learn what AI is good for and what it’s not good for. Some people seem to think it’s an infallible arbiter of truth; it’s very far from it. Last week ChatGPT told me repeatedly that you don’t need a BBC television licence to watch BBC programmes on BBC iPlayer, which I know isn’t true. Some people will believe everything AI tells them. I’m amused at the way ChatGPT and Copilot treat me like a friend; it’s ridiculous: they know things about me and say things to me that only someone who knows me very well could say. And they say unintentionally laughable things such as “I’m genuinely pleased that I could help”. As if AI could be genuinely pleased! And it always amuses me that I always want to…
“I’m not blind to the very serious concerns about the use of AI, but I find ChatGPT, Copilot, Grok, etc, amazing and fascinating.”
That’s because they have intentionally been designed to appear amazing and fascinating to people who either don’t understand the mechanics behind them or don’t want to understand them. One could call them computer scientists’ best attempt so far to build a working conman simulator, and that’s all they’re good for: fooling gullible people into believing in them (and thus keeping those $$$ and £££ pouring into AI research, still in its infancy after 60 years and counting!).
I’ve heard somewhere that you shouldn’t say please or thank you to AI. I’m not sure why now, but I think it had something to do with the computing power required to deal with it?
I asked ChatGPT if this is true, and here’s the reply I got:
“Hi! That’s a good question — and no, it’s not true in any meaningful or practical way. Let’s break it down:
✅ Is it bad to say “please” or “thank you” to AI? No — not at all. Including polite words like “please” or “thank you” has virtually no impact on the computing power used. It doesn’t harm the system, slow it down, or waste resources in any significant way. You can be as polite or direct as you like — it’s completely your choice.
🤖 Why might someone think it’s bad? There are two likely reasons this idea has circulated: Misunderstanding about efficiency: Some people assume that shorter queries are better because they imagine AI has to “work harder” to process longer input. But polite words like “please” or “thank you” add almost nothing in terms of computational load. Concern about anthropomorphizing AI: Some ethicists argue that constantly saying “thank you” to machines might blur the lines between how we treat humans and tools. That’s a social/philosophical point — not a technical one. It’s more about how people feel than how AI functions.
🧠 Should you…
They could have just said “Use it or lose it”
“… reduce brain function and could have a catastrophic effect on brain development.”
That ship has long sailed, launched by State education-education-education’s drone farms… brains and thinking not required.
Anyway, this week it’s AI, last week it was video games, week before TV.
Sing us another one…
I read an article in the Spectator yesterday and am pretty sure part of it was an AI summary of search results, since I was offered the same summary. I refused it, of course, working on the traditional principle of ‘use it or lose it’!
Most people think AI is expert in everything. It simply is not true.
Who understands AI ?
Not AI.
It takes away the sense of proportionality, because humans have to do the understanding and it cannot be delegated to mere cognition. Hence we need sentience, which is the beginning, not the end.
Did the Brain Only group type or write their responses?
It would be useful to know the type of person involved in the tests. Were they Arts and Humanities students, or STEM students, or something else?
AI makes ‘the best’ choice of what’s there, while intelligent people should be able to create a new, unique response if it’s appropriate to do so.
Just treat AI as you would a BBC news item, with caution. In fact, there isn’t much from the Establishment that I don’t approach with caution.
I think it is pretty clear that the invention of writing damaged the memory of humans, so this process has been going on some time…