Research paper: AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

I have been in talks with a few organizations I partner with professionally, many of which have a “sustainability” angle, explaining to them the reality of energy usage in AI image generation. Energy is an enormous concern with AI generally, but image generation is a particular energy hog: roughly 300–500 watt-hours per image on average (as per OpenAI’s documentation), essentially equal to running an LED lightbulb for a few days. Just another thing to think about :slight_smile:
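A quick sanity check on that lightbulb comparison (taking the 300–500 Wh per-image figure as reported above, not independently verified, and assuming a hypothetical 10 W LED bulb):

```python
# Rough sanity check: how long would a typical LED bulb run
# on the energy reportedly used to generate one image?
# Assumption: a common household LED bulb draws about 10 W.

def led_runtime_hours(image_wh: float, bulb_watts: float = 10.0) -> float:
    """Energy in watt-hours divided by bulb power in watts gives hours of runtime."""
    return image_wh / bulb_watts

for wh in (300, 500):
    hours = led_runtime_hours(wh)
    print(f"{wh} Wh on a 10 W bulb: {hours:.0f} hours, about {hours / 24:.1f} days")
```

So the reported range works out to roughly 30–50 hours of LED light, on the order of one to two days; “a few days” is in the right ballpark for a dimmer bulb.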

4 Likes

If we are talking about the USA here especially, and certainly more Western nations are included in what I am about to say, critical thinking has been undergoing a dismantling for some time now. Those who are in power do not need critical thinkers; they need factory workers and those willing to do the State’s bidding in various ways. Technology in general has disempowered people in various ways from having any control over their own realities. Sure, some technology and advancements are helpful, but many are nefarious in their nature and only do harm.

My hope with AI and the rise of very convincing disinformation and misinformation rooted in generative technologies, is that there is a movement toward a return to localism, where people are actually interacting and making a point to go out into the tangible physical world and do things, rather than spend as much time as they do now inside of glass worlds—phones, computers, social networks, etc.

Although those of us who are onto the AI game, maybe a bit more than the masses, can recognize the amount of actual fake content out there, it is becoming worse and more prolific, and sooner or later it will be undeniable that the internet is just not a reliable source of information … though has it ever been? But it will be much worse as the years progress and AI-generated content becomes more convincing and manipulative. Just think of Meta, which trialed a series of AI profiles and users; sure, they labeled them and they were very strange, but as this technology is perfected and used for manipulative purposes, we will not know (to an even worse degree) whether we are interacting with actual humans. The line has already been blurred.

A mad world, indeed.

It’s frustrating to me that the various forms, and uses, of “AI” can get so conflated. There are many applications where AI tools are labour- and tedium-saving. For example, scanning x-ray images as part of cancer diagnosis (the flagged results are then, of course, checked by a human). Similarly, many research areas involve searching enormous data sets (such as subatomic particle collisions at CERN, etc.) to locate particular types of events. This frees up the researchers from the tedium of the searching, and allows them to spend more time doing actual critical thinking about the data.

Of course, just as we admonish students reaching for their calculator to evaluate 2+2, we admonish them for using AI to try to avoid the effort of learning how to apply Newton’s laws. But, once they have mastered the material, we don’t expect them to calculate solar system dynamics by hand.

2 Likes

They get conflated because there is a portion of society deeply fearful and reactionary against AI. They have dedicated and charismatic leaders who are determined to fight AI, which they have labeled a boogeyman. The fear and reactionary instincts they inspire inhibit critical thinking.

It can lead those who despise AI to, at the same time, describe it as a simple recipe for transforming input into output of Unicode glyphs that should not be anthropomorphized, whilst simultaneously worrying that it will target your home with missiles.

Any possible good consequences of AI are immediately either doubted or, if that doesn’t work, dismissed as “not AI,” lest they threaten the overarching boogeyman story.

:pray:

1 Like

I was talking with some people at lunch yesterday and had the realisation that people trust artificial intelligence far more than they trust artificial sweetener.

If we think of AI as the aspartame of intelligence, it’s fake intelligence. Delusion. Yet, the same person who is likely to scorn my PepsiMax is likely to praise AI. :person_shrugging:

2 Likes

Lack of understanding and, even more sadly for the subject we are discussing, lack of critical thinking.

My post, hidden by the community, consists of a simple request to DeepSeek: “Analyse sentence ‘AI, as exists now, will only diminish critical thinking, not enhance it.’ for the presence of potential cognitive errors”, along with its result: the very example, the very fact, of how AI can actually be used to improve critical thinking.

I highly recommend that everyone try such a request, to better see how sad the whole situation is with critical thinking about AI.

1 Like

mikenz66:
…actual critical thinking…

This entails there’s non-actual critical thinking going on. Did you mean to imply that the non-actual critical thinking is being done by AI?

As far as conflation goes, it would be helpful to those who don’t know the difference between, say, AI and AGI, to indicate which type of AI posters are referring to.

I find it insulting to whoever makes the rules for this forum to suggest that they’re not thinking critically about AI, or failed to think critically or critically enough when making rules like that from FAQ 39. They are thoroughly thoughtful people, and strive not to do anything the wise would disapprove of.

2 Likes

@anon4927160, who said:

…the internet is just not a reliable source of information …

Including the information contained in the above quote? (I’m not just trying to be cute and/or clever, either.)

100%. We are on a Buddhist forum after all, so can we just take a moment to consider if anything is truly reliable or stable? I think not.

Also, AGI is not a thing yet (and may never be), as I understand the current research; and since many have mentioned ChatGPT and DeepSeek, I would think we are just talking about AI generally. But I could be wrong.

We’re not on a Buddhist forum. We’re on an EBT forum, which is on the internet. It’s fundamentally paradoxical to try to demonstrate something as informationally unreliable by using the very thing currently being accused of informational unreliability; it’s not likely to sway the likers, dislikers, lovers, haters, the indifferent, or those who agree or disagree with the findings of the Venerable’s OP citation much one way or another.

Most of what I know about AI comes from thought experiments in sci-fi literature and film, from my experiences with chat bots I relayed above, and from Stuart Russell’s excellent 2021 BBC Reith Lectures, Living with AI. Russell argued that if AI is ever to fulfill the bogeyman prophecies, it will come from AGI, not AI. Personally, chat AI, or AI used for data science, doesn’t scare me at all. It just sucks at helping with writing, reinforces tropes about what “good” writing is and isn’t, and fosters mediocrity and intellectual laziness in creativity and critical thinking. However, I do think the view that current acceptance of AI is greasing the slippery slope to the dangers of AGI is worth considering, especially when it comes from folks I consider far wiser than I.

Sure. Whatever you say. This forum has really lost what it once had.

A car that works, but once in a while has an issue, could be called an “unreliable car”; therefore, calling something unreliable doesn’t necessarily mean it fails to serve its general purpose at certain, or even most, points in time.

But, I digress.

It can be hard when someone criticizes our spiritual heroes or those we hold in high esteem. I too have felt personally insulted in the past when this has happened. OTOH, reflecting on the fact that even my spiritual heroes are not immune from receiving criticism has helped me understand that no matter my spiritual progress I too will not be immune from criticism.

When even the Teacher himself was criticized in his lifetime, and certainly since then, what a foolish hope I have to think that I might progress past receiving criticism. This line of reflection has helped me greatly, and perhaps it could help you?

It is possible to receive criticism - even unwarranted criticism full of malice - while not suffering the arrow of personal insult. It is possible to look at the criticism with a sober mind and to analyze the criticism for anything that might be of help. Even if the criticism is not directly helpful it can be used to generate compassion for those who have authored it with ill intent. In this way even malign criticism can be transformed into a further step on the path.

:pray:

I wasn’t being glib. You said the internet was unreliable on the internet. That’s confusing at best. I wouldn’t know about what this forum once had. I’m new here. I hope you aren’t suggesting I’m emblematic of your perception of its decline?

I was hoping you’d say more about why AGI might never be a thing.

A Cretan walks up to you and says, “All Cretans are liars!” … do you believe them? :joy: Poor Epimenides :rofl: :pray:

Greetings, we appreciate our enthusiastic forum participants! Please stay on topic with the Original Post. Thank you :slightly_smiling_face:

1 Like

I’ve followed AI pretty closely, just because I find it interesting, and I am inclined to say that I am not an AI doomer, although I sympathize with their views. I think ruin will find us via ecological catastrophe much sooner than AI will come for us in our sleep.

AI itself has always been something that is not quite here yet, in sci-fi discourse and in tech-bro land, and even with the advances of LLMs, and now the newer “reasoning models,” AI is still ever-changing in how it defines itself.

I tend to think that there is surely going to be an earthquake of sorts as AI relates to various aspects of society and culture, and people will certainly be out of jobs on top of what it will do to the already deteriorated social and cultural condition of the world.

As for AGI, I totally understand the concept; I just fail to truly believe that “human intelligence,” even in its most broad and ambiguous definitions, will ever be reached by a machine. Its ability to “fool” people will certainly increase, and sadly LLMs have already shown a tendency for people to become strangely attached to them, even to commit suicide based on conversations with them.

I certainly don’t discount the power of LLMs; they are truly impressive and can help with research and other things. But as with any technology, they can become addictive, help us bypass recall, let us avoid real learning, and move people even further away from thinking for themselves.

Thank you @rcdaley. That was informative. I learned a lot. Do you think the authors of the study @Sujato cited accurately characterize the relationship between cognitive offloading and critical thinking in general, and in particular when interacting with AI tools like LLMs? And do you think critical thinking is an aspect of our intelligence that AGI will never reach?

I perused the study. The same issue arises with this study that plagues many studies; I will get there in a minute.

So, basically, what I took away from this study is the claim that an over-reliance on AI tools may lead to a decline in critical thinking skills, primarily due to cognitive offloading. They seem to take the position that there is a need for strategies that encourage critical engagement with AI tech, and that people should develop their own analytical skills, etc.

Okay, so let me just define cognitive offloading, at least how I am going to use it based on modern cognitive science and neuroscience…

Cognitive offloading is the use of external resources to store, process, or manipulate information, thereby reducing the need for internal cognitive effort. This concept is central to extended-cognition theory, which argues that cognition is not confined to the brain but is distributed across the mind, body, and environment. It is also important to state, again only from my understanding from books/podcasts/etc., that cognitive offloading is neither inherently good nor bad; it is a fundamental aspect of human cognition that reflects how our brains optimize resources through interaction with external tools, including LLMs and “AI” in this case, generally speaking.

There is also debate in the field about cognitive offloading (the same argument applies to digital technology like phones), revolving around whether reliance on external tools, particularly AI and digital technologies, weakens cognitive abilities or enhances problem-solving. Critics, again from my understanding, argue that excessive offloading reduces mental effort, leading to skill atrophy in areas like memory, spatial reasoning, and, as we are discussing, critical thinking. These individuals worry that automation may discourage deep engagement and independent thought; this argument seems to make perfect sense to me.

Then there are, of course, proponents who suggest that offloading routine tasks frees cognitive resources for more complex reasoning, creativity, and abstract thinking, aligning with various theories of distributed and extended cognition. Essentially, they argue that rather than diminishing intelligence, AI and digital tools may serve as cognitive amplifiers, if used “mindfully,” as one study put it (at least, that was mentioned in a recent podcast I listened to).

So, the key question remains: does offloading lead to dependency and decline, or does it enable new forms of intelligence and problem-solving? Very complex area of thought and quite pertinent to these advancements we are seeing. Although, it does appear many in the field believe we are seeing diminishing returns on the advancement of the technology, etc.

There is obviously a whole issue to be raised with this study: while it raises valid concerns, it faces the common research challenges seen in many studies, including self-reporting biases, difficulties in measuring critical thinking, and conflating correlation with causation. It also seems to overgeneralize age-related trends.

Oversimplified conclusions risk missing the complexity of human–AI interaction, and then there is the whole question of the participants’ life experience … I could go on forever, so I will stop. But yeah. Lots to worry about with AI, and honestly we most likely need global regulation, because it is going to get out of hand.

This is all completely my own understanding as somebody who is not a professional in any of the fields I mentioned above, and am simply regurgitating things as I understand them.

Thank you for your thoughtful summary and analysis of the study.

@anon4927160 said

…the common research challenges seen in many studies, including self-reporting biases, difficulties in measuring critical thinking, and conflating correlation with causation. It also seems to overgeneralize age-related trends.

These types of problems (and oh so many more) are endemic in the social sciences and medical sciences, and even in the so-called gold-standard RCTs, whether double-blinded or not. As an epistemological anarchist, I’m not surprised; but as an advocate and practitioner of critical thinking who all too frequently observes teens and young adults walking the streets and grocery-store aisles with their eyes glued to their phones, it greatly concerns me for our future, even more so than climate (to all: please don’t single this out to debate here; if you’d like to start a new topic, I’ll gladly join you).

In this study, though, the way they organized the correlations was one of the more creative attempts I’ve seen to get around the causation deficit, and it shows a sophistication about the issue usually elided in studies. The “correlation isn’t causation” mantra pushed the pendulum in a much-needed direction, but in the process it has caused us to overlook the use of correlations as clues about causation. These researchers stacked the correlations up in a way that not only points our magnifying glasses in promising directions, but also supports the wisdom in voicing concern about what AI and digital technology are doing to our behaviors.

And at my age, one is always trying to balance not coming off as a Luddite against trying to impart such wisdom to the proverbial “kids these days,” and I confess I’m at a loss as to how to even get studies like this on their radars. Yet if I could, I would, despite the methodological shortcomings, because that per se might invoke critical thinking.
And while I know how to get this article on the radars of my old hippie and/or punk-rocker friends now completely immersed in corporate cultures and shockingly gung-ho for AI, it seems like they (well, the two I have in mind, anyway) are maybe too far gone already, at least judging by previous attempts to engage them in discourse about my concerns about AI now and its implications for AGI, or whatever we end up calling it, in the future. There might be hope for the one I mentioned already (the guy who fed my writing piece to an AI behind my back), as he’s still open to receiving my observations not only about how his ethical intentions generally align with the five precepts, but also about how the EBTs and the practical instructions therein have relevance to the stress he feels as a leader in a digital corporate world rapidly becoming mesmerized by AI in all its forms.