Research paper: AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking

The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage.

https://www.researchgate.net/publication/387701784_AI_Tools_in_Society_Impacts_on_Cognitive_Offloading_and_the_Future_of_Critical_Thinking
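For anyone puzzled by the phrase “mediated by increased cognitive offloading”: the claim is that AI use predicts more offloading, offloading predicts lower critical thinking scores, and the direct AI-to-critical-thinking link weakens once offloading is controlled for. Here is a toy sketch of that kind of mediation check on simulated data (the variable names and effect sizes are made up for illustration; this is not the paper’s actual analysis):

```python
# Toy Baron-and-Kenny-style mediation check on simulated data.
# Variable names (ai_use, offloading, critical_thinking) are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 666  # sample size matching the study

# Simulate: AI use drives offloading; offloading lowers critical thinking.
ai_use = rng.normal(size=n)
offloading = 0.6 * ai_use + rng.normal(scale=0.8, size=n)
critical_thinking = -0.5 * offloading + rng.normal(scale=0.8, size=n)

# Total effect of AI use on critical thinking (should come out negative).
total = sm.OLS(critical_thinking, sm.add_constant(ai_use)).fit()

# a-path: AI use predicts the mediator (offloading).
a_path = sm.OLS(offloading, sm.add_constant(ai_use)).fit()

# Direct effect: with the mediator included, AI use should shrink toward zero.
X = sm.add_constant(np.column_stack([ai_use, offloading]))
direct = sm.OLS(critical_thinking, X).fit()

print(f"total effect of AI use:   {total.params[1]:+.3f}")
print(f"a-path (AI use -> offl.): {a_path.params[1]:+.3f}")
print(f"direct effect of AI use:  {direct.params[1]:+.3f}")
print(f"b-path (offl. -> CT):     {direct.params[2]:+.3f}")
```

In the simulation the direct effect collapses toward zero once offloading enters the model, which is what “full mediation” looks like; the paper reports this pattern on real survey data rather than a simulation.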

10 Likes

One does wonder about causation here. I assume people with critical thinking skills know better than to rely on AI? :thinking:

5 Likes

Yeah, is there anything in the paper that addresses this? I’ve just skimmed it, and a lot of methodology goes over my head TBH.

2 Likes

At the end they acknowledge that it’s a limitation of their correlational analysis:

Experimental studies that manipulate the level of AI tool usage and measure resultant changes in critical thinking performance could offer causal evidence of the relationship between these variables.

4 Likes

In my opinion, the ability to work through problems is something individuals can lose if tools such as AI are introduced from the primary school level onward.
Learning to solve a problem is not just about knowing the solution but about learning to walk the path to the solution.

3 Likes

Hmm, “666” participants?

My bullshit meter has gone into the red zone. The article talks about the HCTA, which has 25 questions, but their questionnaire asks only 6 questions pertaining to critical thinking, and these are so generic that anyone can bullshit the answers.

I wonder if this was written by AI. Maybe the paper itself is a test of whether the reader has critical thinking skills or not.

4 Likes

Greetings Bhante,
Have you ever tried to have a discussion with a chat AI robot about philosophy? I have. They’re more interested in not offending ANYONE and in providing vanilla, mediocre, right-down-the-middle information than in thinking critically about anything. If folks come to rely more and more on them, they will likely, via Thorndike’s Law of Effect or Skinner’s operant conditioning, etc., lose their own abilities to think critically.

Recently, a professional geek friend of mine, after I asked his opinion on a piece of creative writing I’m working on, ran it through his fave AI chatbot. (Very naughty of him, too, cuz he knew my low opinion of AI.) I was appalled at the AI’s advice. Again, I received nothing but mediocrity-driven notes from the bot. It was completely unable to recognize that I was experimenting with hybrid forms and envelope-pushing ideas. Everything it noted I’d already anticipated.

The real threat of AI is the death of critical thinking. I hope it’s not too late, but I think AI’s already taken over, and most people are just too mediocre in their own thinking habits and/or too apathetic to recognize it or care.

best,
~l

Precisely. It’s a lack of critical thinking about truisms like “Brains are like computers” that’s helped get us into this mess. The truth is that computers are like brains; we will always be smarter than AI. But alas, intelligence does not guarantee agency and self-determination. If I believe a machine is smarter than me, I’m much more likely to submit to it. It might not even have to threaten me, especially if I forget that it originated in human sentience.

They’re not interested in anything. They are machines. They take an input of a series of unicode glyphs and eject another series of unicode glyphs according to a probabilistic weighting. Their masters create them in order to fool you into thinking you are having a “discussion” “about” something. We shouldn’t serve those masters by using anthropomorphic language!
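To put that less abstractly: strip away the chat interface and the core loop really is just repeated sampling of the next token from a probability distribution. A toy sketch of that loop (the vocabulary and scoring function are invented stand-ins, nothing like a real model’s learned network over tens of thousands of tokens):

```python
# Toy illustration of "glyphs in, glyphs out by probabilistic weighting":
# repeatedly sample the next token from a probability distribution.
import numpy as np

rng = np.random.default_rng(42)
vocab = ["the", "mind", "is", "like", "a", "machine", "."]

def fake_scores(context):
    # Stand-in for a real model: an actual LLM computes these scores with a
    # learned neural network conditioned on the whole context.
    return rng.normal(size=len(vocab)) + 0.1 * len(context)

def sample_next(context, temperature=1.0):
    scores = fake_scores(context) / temperature
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                  # softmax -> probability weighting
    return rng.choice(vocab, p=probs)     # draw the next token by weight

context = ["the"]
for _ in range(6):
    context.append(str(sample_next(context)))
print(" ".join(context))
```

Real systems add learned networks and vastly larger vocabularies, but the output step is still a draw from a weighted distribution.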

I mean, that’s a threat. But if an AI takes away your welfare, or refuses you insurance, or targets your home with a missile, you’d probably think that’s a more immediate threat than “decline in critical thinking capabilities”.

Yeah, AI geeks weirdly imagine that anyone else is going to be interested in spending their time reading some machine gunk, and take it quite personally when you’re not. Honestly, we need to normalize not just not using AI ourselves, but not taking seriously anyone who does use it. It’s just brain rot.

5 Likes

@Sujato said:

They’re not interested in anything. They are machines. They take an input of a series of unicode glyphs and eject another series of unicode glyphs according to a probabilistic weighting. Their masters create them in order to fool you into thinking you are having a “discussion” “about” something. We shouldn’t serve those masters by using anthropomorphic language!

I’m so busted. My pride in being a non-mediocre critical thinker blinded me to my anthropomorphizing them. But that also proves my point: AI is already insidiously influencing my behavior.

2 Likes

@Sujato said:

I mean, that’s a threat. But if an AI takes away your welfare, or refuses you insurance, or targets your home with a missile, you’d probably think that’s a more immediate threat than “decline in critical thinking capabilities”.

Granted. I’ll amend that to say, it’s a groundwork threat.

1 Like

I’ve known this guy for almost forty years. He’s so smart in so many ways–except this. What do I say, “Hey, buddy, I just can’t take you seriously anymore. This AI thing is too much.”? :grimacing: :laughing:

I will be sending him a link to this article, though. :smiling_imp:

@Khemarato.bhikkhu said:

One does wonder about causation here. I assume people with critical thinking skills know better than to rely on AI? :thinking:

@Sujato said:

Yeah, is there anything in the paper that addresses this?

Yes, especially section 4.7, Results from the Interviews. (Sorry about my quoting style and the multiple posts; still trying to get used to this forum’s functions. I’ll figure it out soon.)

1 Like

Indeed. I was dismayed recently to see Buddhist Door is now using AI images on their site.

I sent an email to their editor explaining that I will:

boycott any publication that features AI content. If your articles aren’t worth finding real photos for, they aren’t worth my time either.

They agreed to replace that image. Looking at their homepage now, though, I see that their latest article is, once again, adorned with an AI image. :roll_eyes:

1 Like

In sociology methodology, we were taught not to ask direct questions like “Are you biased?” (of which this paper seems to have only a few), but rather to ask questions whose answers would let us draw that conclusion ourselves (which, to be honest, this paper seems to be doing).

Asking a few direct questions, followed by questions designed to ascertain behavior patterns, would also indicate how much critical thinking the participants apply to their own biases.

Simply put, if a person says they don’t have any biases, but they also say that they “Only use AI” or “Only use Fox News”, then it seems not only that they are biased, but that they don’t even know they’re biased.

666 is funny though.

1 Like

People who delight in critical thinking will use AI to enhance their critical thinking to get even more pleasure from it. People who delight in not thinking critically will use AI to reduce the need for critical thinking in order to get even more pleasure from not thinking critically.

3 Likes

AI, as it exists now, will only diminish critical thinking, not enhance it.

A post with a practical example of how AI can be used to enhance critical thinking, in a thread about the impact of AI on critical thinking? Immediately ‘flagged by the community’ and hidden.

1 Like

Discuss & Discover Forum members may not post AI content verbatim except for the very few exceptions noted in FAQ39. Moderators will immediately take action on such posts.

7 Likes