
“It’s not that I’m so smart, it’s just that I stay with problems longer.” — Albert Einstein

Here, the wise physicist who proclaimed the “modest life” was channeling something gritty about mental perseverance. For example, you could persevere through the rest of this essay, or you could pick up your smartphone and move on to the next thing. This is our dilemma today: an utter intolerance for staying with problems, challenges, or really much of anything at all. So: click, click, click.

Our irritation at inconvenience is more palpable than ever, and instant gratification has never been quite so, well, instant. So instead of staying with a problem, we simply offload it, copy and paste it, into ChatGPT or any of its brethren. Cognitive offloading. Instant answers. Everybody is doing it!

Now plenty has been written about the virtues of artificial intelligence for vacation planning, recipe making, foreign language learning, and radiology interpretation. A properly programmed chatbot tutor aids students with patience, precision, and accessibility. But there are trade-offs. Artificial intelligence directed at cancer cures is as inspiring as those same technologies are terrifying when aimed at biological warfare. There is a potential to “solve” environmental problems even as AI data centers consume godawful amounts of energy with negative environmental consequences. We will always have problems, despite AI enthusiasts’ claims of utopia.

Let’s return to Einstein’s point about staying with problems: “Stay with it!” This is the encouragement from parent to child, teacher to student, and therapist to client. In all three domains, growth often comes by staying with challenges and learning from that (sometimes) nettlesome process. Learning requires exertion and effort, qualities essential to knowledge acquisition honed over tens of thousands of years. Back to offloading. It forecloses opportunities for growth and learning in exchange for quick and easy answers that present a compelling illusion: that artificial intelligence is our own intelligence. We feel smart and competent. But ironically, emerging research shows diminished intellectual abilities with AI use, particularly a loss of critical thinking: a world feeling smarter, even as it is getting dumber.

An added and unfortunate twist: cognitive surrender. Arguably more insidious than intentional offloading, we surrender without quite knowing it. We give up our intuitive and deliberative capacities as we adopt AI-generated information. No questions asked. One research study found that people followed wrong AI answers 80% of the time. Unlike the decision to offload, surrender means giving up and giving in, not to our own brainpower but to that of a “machine.” Surrender, as in we lose.

There is something called automation bias that applies here. We tend to believe machine-generated answers more than information from traditional sources like books or (wise) people. The EdTech revolution of the last four decades saturated classrooms with computers and tablets, and with smartphones on a desk, in a pocket, or in a purse. Research shows that the mere proximity of a smartphone reduces concentration! Many attribute declines in student achievement to these technologies, and to social media (a story for another day). Artificial intelligence is the latest and greatest tech threat to our own intelligence. The disruptions to education are already immense. Students are using ChatGPT and similar technologies to write essays and term papers and to complete online exams. They are not learning course subject matter, and they are (rapidly) forgetting how to think.

Of course, that means that schools themselves must be at the forefront of caution and regulation, right? Nope. Ohio State University will require an entire course and numerous workshops on the use of AI for its students. California State University has contracted with OpenAI for something ominously called ChatGPT Edu, an oxymoron of sorts, providing over 450,000 students and 63,000 staffers with a premium version of the platform. For those of us who can remember the aforementioned EdTech revolution, with Apple’s school computer discounts and giveaways in the 1980s, this is true déjà vu, the nightmarish kind. Not to be outdone, California Community Colleges will support “human-centered AI grounded in equity, accountability, and student success with faculty, staff, administrators, students, and partners to expand access.” How do we reconcile these lofty sentiments with all that offloading and surrender? Cynics wonder how this trillion-dollar industry comes to exert such influence over our colleges and universities.

Some argue for a moratorium on these regressive steps by school administrations. In the style of best-practice research, there should be a review of the scientific literature, which has already exposed scores of problems and flaws with this technology in educational settings. There should be open and honest meetings with stakeholders: more students and teachers; fewer administrators and industry “experts.”

All of education should pause. No more AI implementation. We need to stay with this problem a little longer.