We talk a lot, and occasionally to good ends. Perhaps we do not appreciate our gifts of communication, in part because speech is so pervasive. And listening so scarce!
Grasshoppers hum out to their fellows with rapid strokes of antennae or wings. Ants deposit pheromone trails for safekeeping: to guide their fellows, elude predators, and pursue food sources. Pheromones, you may have heard, are also pretty good for attracting mates! Safety, food, and sex are high stakes for tiny creatures, and big ones too. One wonders why insects are so good at communicating even as humans can be pretty darn bad at it. Perhaps, unlike us, grasshoppers truly hear the sounds of their sidekicks.
When did talking rise to the pinnacle of our social experience? When did listening descend to a postscript? Our world of Talkers seems to grow louder with each Tech advance, from telephones, to the internet, to social media. Such cacophony! Now most everyone has a platform, demanding influence, likes, and listeners. But with everyone expressing, no one is left to be receptive, to receive, to bear witness.
The Psychotherapy Room occupies a sacred space in this communication paradigm. It’s blessed like a Priest’s Confessional, or the Sacred Circle of a Shaman, or going alone to the teacher in Dokusan. Special communications are joined and knowledge is exchanged. Listening is integral. Clients in psychotherapy enjoy, muddle through, and struggle with these exchanges. But there prevails a special overarching experience: to be heard, to be witnessed. Some clients realize that, whether by family or friends or colleagues, they have never or only rarely been heard. Think of it! And then ponder how increasingly normalized this tragedy has become in our world of Talkers.
Ironically, the deep desire to be heard elevates loquacity. The more we want to be heard, the more we seem to speak. And the more conversations take place in vitro (in the mind) and not in vivo. Inner monologues and dialogues are fine only if they do not displace communicating with another: I-Thou versus I-I. Picture a world of neurotics mistaking interior me-to-me conversations for the real thing.
This delusion has exploded today across social media mirages. Ours has become an asocial and lonely world. Legions have turned away from their human counterparts only to nuzzle with human illusions transmitted on smartphones. Many more will look to artificial intelligence and chatbots to try to nourish social and belonging needs. But there is no there there. Tiny pockets of society are already collapsing because of it…youngsters are more vulnerable, but all are at risk. We are starving for each other.
Sometimes, when things fall apart, opportunity arises for something new. Perhaps we now arrive at such a place and such a time: a time to be still and listen deeply; to gaze into the eyes of another; to smile… and give a gentle hug.
We live in some kind of mix of wonder and terror, with so many fed-up people swearing off the news… and becoming hooked on their smartphones. Evolutionary biologists have called ours an age of hyper-novelty, whereby our novelty-attracted brains bounce from this to that in an ever-accelerating media world bent on attention capture. Minds captured by technologists who manipulate our deepest emotions: fear and envy and anger. Yep, our minds, our very intelligence, are trapped by clever bells and whistles into repetitive phone searches and doomscrolling. And when you reflect upon it, you are not always sure just why you picked up your phone in the first place.
And it is about to get a lot worse.
Let’s start in the distant past for a chance to understand where we are now, and where we might be going. And by the way, no expertise in evolutionary science is claimed here, so the following is advanced from the perspective of just a sincere student.
Nearly two million years ago, Homo habilis roamed this earth, Africa to be exact. He was so-called ("handy man") for his ability to fashion stone tools, used for tasks like butchering large animals. The little fellow stood 3½ to 4½ feet tall, and his body was somewhat apelike.
This short-legged, long-armed being is invoked here to exemplify one of the earlier members of the genus Homo, and an indirect ancestor of our own species. We are Homo sapiens, and hopefully we have learned a few things about tools over the past 300,000 years. Homo sapiens means "wise man". How wise human beings have been with the tools they create could occupy volumes of pros and cons. Think hammer and plow, printing press and steam engine; think machines of war. Yes, it’s a mixed bag.
Now, think computers and artificial intelligence, and feel the tilt of the room, if not the whole planet.
A central thesis in this little essay is that wisdom takes some time… and that we are behind schedule. This proposition is best explained by the esteemed Myrmecologist who argued that "the real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology". What could possibly go wrong?
AI developers have been obsessed with developing the god-like tech of intelligence: AGI. Theoretically, artificial general intelligence will surpass human capacities in, well, everything. And then there is superintelligence, which can, it is said, solve all problems. You have probably heard of the high-stakes race to develop this god-like technology. While it is sometimes represented as a panacea (there will be some of that), it is being created by people with Paleolithic emotions…a room full of Homo habilis-minded chief executives. Think job loss, cognitive decline, autonomous weapons. Nightmares will outnumber panaceas. By a lot.
Motivated by avarice and ego, they race forward at breakneck speed with no brakes. They seem to believe, wrongly, that developing intelligence is merely a matter of ever more information and large language modeling. This dangerous race is like something out of middle school…Sam must beat Elon…Elon must beat Dario. But there is a point not considered, or perhaps just suppressed, because it would only slow these ambitions down.
You see, while intelligent, these very smart AIs are not wise, nor will they ever be. This reflects the intelligent but unwise minds of their creators. AI creators seem not to understand that when wisdom does take hold, it is more than an expanding cranium. It is that larger brain, but so much more: development across vast expanses of time, with a visceral exertion to survive and send our genes down the road for one more generation. That is what has grown whatever wisdom humans might take credit for, across religions and philosophies and mythologies. Technologists can set in motion the ingredients, but only time and (right) effort will bake the evolutionary cake. AGI will mimic Lao Tzu without the wisdom of the Old Master. Lao Tzu with hallucinations.
Perhaps this error in conceptualizing intelligence and wisdom relates to a prejudice most of us can understand. We all knew a smart kid growing up. Maybe we were that kid. And we have been swarmed with stereotypes about intelligent people and unintelligent people our whole lives. Status and accolades surround the intelligent; there is understandable pride and, quite often, the social trappings of success. AI creators occupy that space.
All this raises the question: what about the unintelligent? What about people with intellectual disability, and what about people with dementia and other cognitive impairments? There has always been a societal tendency to stigmatize and shun persons with mental statuses like these. Our (false) belief system about these people promotes disrespect, indifference, and abuse. Here we find ourselves facing square-on an under-discussed prejudice, intelligism: a bias against persons with cognitive limits or impairments, based on an over-valuing of intelligence itself.
Of course, the onset matters: whether impairments happen at birth, in midlife, or in later years. Congenital disorders seem to be part of Identity and are often embraced. Later onsets, with accidents and illness, may seem alien to Identity and are usually rejected. All of that is a story for another day. Here, we contend that truly seeing the personhood of an individual with a diminished mind is a great blessing to the person and to their beholder. To believe otherwise is to discount personhood and buy into the intelligism that inflates the merit of intelligence into a false god. The AI rush is a nightmare and a heresy.
After all, AGI will certainly surpass all of us intellectually: you, me, and everyone we know. In relation to AGI, we are all the impaired, the unintelligent masses. We will bristle against this late-onset condition and feel trapped in it and by it. For it was only yesterday that we were OK, and then suddenly, we find ourselves thrown from the top of the heap.
Will AI condemn us? Shall we condemn ourselves? Shall we forget the wisdom that will always distinguish us from an intellectually superior AGI? At our best, we have these hundreds of thousands of years of exertion and learning, yielding personhood’s wisdom, and we desperately need it now. Lao Tzu did not represent wisdom as a function of large language models. In truth, he said The Way was ineffable.
So it is all about to get a lot worse. Unless we can be wise, put intelligence in perspective, and pause AI development before that nightmare takes hold.
“It’s not that I’m so smart, it’s just that I stay with problems longer.” Einstein
Here, the wise physicist who proclaimed the "modest life" was channeling something gritty about mental perseverance. For example, you could persevere through the rest of this essay, or maybe just pick up your smartphone and move on to the next thing. This is our dilemma today: an utter intolerance for staying with problems, challenges, or really much of anything at all. So: click, click, click.
Our irritation at inconvenience is more palpable than ever, and instant gratification has never been quite so, well, instant. So instead of staying with a problem, we simply offload it: copy and paste it into ChatGPT or any of its brethren. Cognitive offloading. Instant answers. Everybody is doing it!
Now, plenty has been written about the virtues of artificial intelligence for vacation planning, recipe making, foreign language learning, and radiology interpretation. A properly programmed chatbot tutor aids students with patience, precision, and accessibility. But there are trade-offs. Artificial intelligence directed at cancer cures is as inspiring as those same technologies are terrifying when aimed at biological warfare. There is a potential to "solve" environmental problems even as AI data centers require godawful amounts of energy, with negative environmental consequences. We will always have problems, despite AI enthusiasts and their claims of utopia.
Let’s return to Einstein’s point about staying with problems: "Stay with it!" This is the encouragement from parent to child, teacher to student, and therapist to client. In all three domains, growth often comes by staying with challenges and learning from that (sometimes) nettling process. Learning requires exertion and effort, qualities essential to knowledge acquisition and honed over tens of thousands of years. Back to offloading. It forecloses opportunities for growth and learning in exchange for quick and easy answers that present a compelling illusion: that artificial intelligence is our own intelligence. We feel smart and competent. But ironically, emerging research shows diminished intellectual abilities with AI use, particularly a loss of critical thinking…a world feeling smarter, even as it is getting dumber.
An added and unfortunate twist: cognitive surrender. Arguably more insidious than intentional offloading, we surrender without quite knowing it. We give up our intuitive and deliberative capacities as we adopt AI information. No questions asked. One research study found that people followed wrong AI answers 80% of the time. Unlike the decision to offload, surrender means giving up and giving in: not to our own brainpower but to that of a "machine". Surrender, as in we lose.
There is something called automation bias that applies here: we tend to believe machine-generated answers more than information from traditional sources like books or (wise) people. The EdTech revolution of the last four decades saturated classrooms with computers and tablets. And with smartphones on a desk, in a pocket, or in a purse. Research shows that the mere proximity of a smartphone reduces concentration! Many attribute declines in student achievement to these technologies…and to social media (a story for another day). Artificial intelligence is the latest and greatest tech threat to our own intelligence. The disruptions to education are already immense. Students are using ChatGPT and similar technologies to write essays and term papers and to complete online exams. They are not learning course subject matter, the so-called "learning objectives", and they are (rapidly) forgetting how to think.
Of course, that means that schools themselves must be at the forefront of caution and regulation, right? Nope. Ohio State University will require an entire course and numerous workshops on the use of AI for its students. California State University has contracted with OpenAI for something ominously called ChatGPT Edu…an oxymoron of sorts. This provides over 450,000 students and 63,000 staffers with a premium version of the platform. For those of us who can remember the aforementioned EdTech revolution, with Apple computer school discounts and giveaways in the 1980s, this is true déjà vu…the nightmarish kind. Not to be outdone, California Community Colleges will support "human-centered AI grounded in equity, accountability, and student success with faculty, staff, administrators, students, and partners to expand access". How do we reconcile these lofty sentiments with the deep dark side of offloading and surrender? Cynics ponder how this industry, with billions invested, seems to exert such influence over our colleges and universities.
Some argue for a moratorium on these regressive steps by school administrations. In the style of best-practice research, there should be a review of the scientific literature, which has already exposed scores of problems and flaws with this technology in educational settings. There should be open and honest meetings with stakeholders: more researchers, and especially students and teachers; fewer administrators and industry "experts".
All of education should pause. No more AI implementation. We need to stay with this problem a little longer.
“…when discovery and exploration and curiosity become your path – then basically, if you follow your heart, you’re going to find that it’s often extremely inconvenient….”
A well-known Buddhist nun wrote those words. A former teacher herself, she offers sentiments essential to understanding modern life and, as it turns out, current challenges in education.
You see, once there was education quite different from today, with almost no technology (unless you count reel-to-reel and overhead projectors). No computers, no big screen at the front of a classroom, and no screen on every desk. There were just books and pens and paper and, well, education. It was more the path of discovery, exploration, and curiosity. But that process was demanding, so messy, so time-consuming, and altogether inconvenient. So streamliners emerged and promoted personal computers for all. It was the start of the EdTech revolution. Nobody bothered to figure out its effect on education, but it sure was profitable…and convenient.
Here is another example. College students today cannot quite imagine a college lecture without publisher-produced slides. These polished presentations cram data onto a large screen as the class frantically keyboards it all into notes on their tiny computer screens. Professors mumble on, only partly heard. You see, the slides in many undergraduate courses accompany the texts, and both come from the same corporate publishing house. Text- and slide-based lectures tend to have little variation and always the same author perspective. This amounts to an unfortunately narrow view of the subject for both teacher and student. But it is terribly convenient!
Of course, some teachers try to address the problem. They hand out copies of the module slide deck for note-taking (but a slide is still a slide). Some, thank goodness, distill the data on each slide into a sleek visual, like a TED Talk. Still, for those of us who remember the pre-slide era, nostalgia rules: the once-ubiquitous chalkboard (precursor to the whiteboard). Chalk dust, like a tiny cloud of floating ideas, emanated from intense cracks and scratches at the board. Not static like a slide, the chalkboard allowed for spontaneous contributions by students and new ideas from teachers in real time. Erasures and rewrites were common, as was a sudden turn from the board to engage the class. Engagement!
This nostalgic view should be tempered by the fact that some classes, then as now, were not so good, just as some teachers, then as now, are not so good. As a famous Buddhist teacher once stated, "nostalgia for samsara is full of shit". Samsara is the cycle of birth, death, and rebirth, fueled by ignorance. This short essay cannot resist a tangential analogy: the publisher-based college course birthing, dying, and birthing yet again each semester.
Today colleges face a new temptation toward ultimate convenience: artificial intelligence. Looking for shortcuts, students copy and paste questions and prompts into ChatGPT, which generates a discussion reply, essay, or term paper…instantly. No thinking required! Similarly, online examinations are completed with near perfection, in less elapsed time than would be required to actually read each test item. This practice appears to have emerged during the pandemic, when education was thrust online with little forethought and no guardrails. Using AI in school became standard practice for many otherwise honest students. So convenient!
The practice continues despite persuasive research findings that AI use results in student "cognitive offloading" and a diminishment of critical thinking. After all, learning to write is essential to learning to think. Writing is a primary tool for cognitive development. The struggle for clarity and precision of the written word, for the reasoned argument, is the struggle to become a clear thinker, a critical thinker. But becoming a critical thinker is quite inconvenient.
Most professors lamented AI use among students. Some resisted, with in-class writing or with shared drafts toward a final, original submission. But others rationalized their own use of this tech to produce lecture notes, (sadly) more slides, and even to grade papers.
Think of it: an AI-generated essay scored by an AI-generated rubric. Both students and teachers freed from the learning process at long last. Education had become entirely convenient!
What about the overseers? College Administrators were in a quandary. Long ago, most colleges had adopted a business model for education in which students were consumers, and the customer is, as they say, always right. Investments were made in great auditoriums and stadiums, but mostly in student convenience. There were trigger warnings and lots of handholding. There were interactive whiteboards, virtual reality simulations, and AI software. Convenience sells.
The administrative goal was to fill seats and generate income. A good chunk of that revenue was devoted to hiring more and more Administrators, who were not so much on the side of real teaching or real learning. Most were, understandably, on the side of the college business model. They believed they could not stop AI use, so they hired Consultants to rationalize that AI use in college was actually a very good thing.
But learning grows in the muck of inconvenience. "No Mud, No Lotus", as Thay famously said. Consider again bygone days and how we best learn: digging into research and being challenged by contrary ideas; composing and re-composing a paragraph, even a sentence…or the exquisite agony of finding just the right word.
Groundswell movements emerged to challenge the convenience of technology. Phones were banned in many classrooms. Computers too. Some evidence shows that physical books and handwritten notes foster learning far better than EdTech. Quiet keyboarding gave way to loud conversation and exploration. Let Them Grow movements arose. Children and teens were persuaded to interact with each other and solve problems without a phone or parental interface…great preparation for higher education, and for life.