The findings revealed a significant negative correlation between frequent AI tool usage and critical thinking abilities, mediated by increased cognitive offloading. Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. Furthermore, higher educational attainment was associated with better critical thinking skills, regardless of AI usage.
One does wonder about causation here. I assume people with critical thinking skills know better than to rely on AI?
Yeah, is there anything in the paper that addresses this? I've just skimmed it, and a lot of the methodology goes over my head TBH.
At the end they acknowledge that it's a limitation of their correlational analysis:
Experimental studies that manipulate the level of AI tool usage and measure resultant changes in critical thinking performance could offer causal evidence of the relationship between these variables.
In my opinion, the ability to work through problems is something individuals can lose if tools such as AI are introduced from the primary level.
Learning to solve a problem is not just about knowing the solution but learning to walk the path to the solution.
Hmm, "666" participants?
My bullshit meter has gone to the red zone. The article talks about the HCTA, which has 25 questions, but their questionnaire only asks 6 questions pertaining to critical thinking, and these are so generic anyone can bullshit the answers.
I wonder if this was written by AI. Maybe the paper itself is a test of whether the reader has critical thinking skills or not.
Greetings Bhante,
Have you ever tried to have a discussion with a chat AI robot about philosophy? I have. They're more interested in not offending ANYONE and providing vanilla, mediocre, right-down-the-middle information than thinking critically about anything. If folks are to rely more and more on them, they will likely, via Thorndike's Law of Effect or Skinner's operant conditioning, etc., lose their own abilities to think critically.
Recently, a professional geek friend of mine, after I asked his opinion on a piece of creative writing I'm working on, ran it through his fave AI chat bot. (Very naughty of him, too, cuz he knew my low opinion of AI.) I was appalled at the AI's advice. Again, I received nothing but mediocrity-driven notes from the bot. It was completely unable to recognize that I was experimenting with hybrid forms and envelope-pushing ideas. Everything it noted I'd already anticipated.
The real threat of AI is the death of critical thinking. I hope it's not too late, but I think AI's already taken over, and most people are just too mediocre in their own thinking habits and/or apathetic to recognize it or care.
best,
~l
Precisely. It's a lack of critical thinking about truisms like "Brains are like computers" that's helped get us into this mess. The truth is that computers are like brains; we will always be smarter than AI. But alas, intelligence does not guarantee agency and self-determination. If I believe a machine is smarter than me, I'm much more likely to submit to it. It might not even have to threaten me, especially if I forget that it originated in human sentience.
They're not interested in anything. They are machines. They take an input of a series of unicode glyphs and eject another series of unicode glyphs according to a probabilistic weighting. Their masters create them in order to fool you into thinking you are having a "discussion" "about" something. We shouldn't serve those masters by using anthropomorphic language!
I mean, that's a threat. But if an AI takes away your welfare, or refuses you insurance, or targets your home with a missile, you'd probably think that's a more immediate threat than "decline in critical thinking capabilities".
Yeah, AI geeks weirdly imagine that anyone else is going to be interested in spending their time reading some machine gunk, and take it quite personally when you're not. Honestly, we need to normalize not just not using AI ourselves, but not taking seriously anyone who does use it. It's just brain rot.
@Sujato said:
They're not interested in anything. They are machines. They take an input of a series of unicode glyphs and eject another series of unicode glyphs according to a probabilistic weighting. Their masters create them in order to fool you into thinking you are having a "discussion" "about" something. We shouldn't serve those masters by using anthropomorphic language!
I'm so busted. My pride in being a non-mediocre critical thinker blinded me to my anthropomorphizing them. But it's also to my point. AI is already insidiously influencing my behavior.
@Sujato said:
I mean, that's a threat. But if an AI takes away your welfare, or refuses you insurance, or targets your home with a missile, you'd probably think that's a more immediate threat than "decline in critical thinking capabilities".
Granted. I'll amend that to say it's a groundwork threat.
I've known this guy for almost forty years. He's so smart in so many ways, except this. What do I say, "Hey, buddy, I just can't take you seriously anymore. This AI thing is too much."?
I will be sending him a link to this article, though.
@Khemarato.bhikkhu said:
One does wonder about causation here. I assume people with critical thinking skills know better than to rely on AI?
@Sujato said:
Yeah, is there anything in the paper that addresses this?
Yes. Especially 4.7, Results from the Interviews. (Sorry about my quoting style and the multiple posts; still trying to get used to this forum's functions. I'll figure it out soon.)
Indeed. I was dismayed recently to see Buddhist Door is now using AI images on their site.
I sent an email to their editor explaining that I will:
boycott any publication that features AI content. If your articles aren't worth finding real photos for, they aren't worth my time either.
They agreed to replace that image. Looking at their homepage now, though, I see that their latest article is, once again, adorned with an AI image.
In sociology methodology, we were taught not to ask direct questions like "Are you biased?" (of which this paper seems to have only a few), but rather to ask questions whose answers would lead us to the conclusions ourselves (which, to be honest, this paper does seem to be doing).
Pairing a few direct questions with questions that probe behavior patterns would also indicate how much critical thinking the participants were applying to their own biases.
Simply put, if a person says they don't have any biases, but they also say that they "only use AI" or "only use Fox News", then it seems not only that they are biased, but that they don't even know they're biased.
666 is funny though.
People who delight in critical thinking will use AI to enhance their critical thinking to get even more pleasure from it. People who delight in not thinking critically will use AI to reduce the need for critical thinking in order to get even more pleasure from not thinking critically.
AI, as it exists now, will only diminish critical thinking, not enhance it.
A post with a practical example of how AI can be used to enhance critical thinking, in a thread about the impact of AI on critical thinking? Immediately "flagged by the community" and hidden.
Discuss & Discover Forum members may not post AI content verbatim except for the very few exceptions noted in FAQ39. Moderators will immediately take action on such posts.