Are There Any True Historical Precedents for Our Current AI Fears?
I find myself using LLMs a lot less now than I was a year ago or even six months ago. For a time, I used them extensively to come up with first drafts of things as well as for brainstorming and research. I still use them as a research tool, but I no longer do first drafts in AI. I came to the conclusion that I was losing something precious by doing so and that my mind was weakening as a result. These days I write everything the old-fashioned way, and the writing has been flowing in a way it hadn’t been for some time.
There seem to be good reasons to fear that AI could make us stupider. While studies of AI’s effects on our cognitive skills are in their infancy, a growing body of research, according to a BBC article published this week, suggests that “cognitive offloading” to AI can have a detrimental effect on our mental capabilities. A study at the MIT Media Lab divided students into three groups–one using ChatGPT, one using Google search with AI-generated summaries turned off, and one with no technology at all–and assigned them short essays. Both the no-technology and search-engine groups showed strong brain activity, while the ChatGPT group showed a 55% reduction in it.
Not surprisingly, the students in the ChatGPT group were unable to quote from their own essays, and several said they felt no ownership of the work. Other studies have shown that people become less able to retain and recall information when they use AI tools. Researchers at the University of Pennsylvania have described what’s going on as “cognitive surrender”: people are increasingly willing to accept what AI tells them and even to let it override their own intuitions.
All told, there seem to be plenty of reasons to fear what AI is doing to our brains. Still, this isn’t the first time we’ve been through a major technological paradigm shift that prompted fears of intellectual erosion. While this moment does feel different in many important ways, it’s not entirely without precedent.
The Invention of Writing
Writing was invented independently in four different places: in Mesopotamia somewhere between 3400 and 3100 BCE, in Egypt around 3250 BCE, in China sometime before 1250 BCE, and in Mesoamerica (modern-day Mexico) sometime before 1 CE. Many of the world’s ancient cultures, including those of archaic Greece and India, were oral cultures that adopted writing only later. And while writing made record-keeping and the transmission of teaching and culture much easier, oral cultures nonetheless produced major cornerstones of learning and literature, from the early Vedic texts of Hinduism to Homer’s Iliad and Odyssey.
While some traditions (notably Hinduism and Islam) still place great emphasis on memorization and oral transmission, the modern world is almost entirely dependent on the written word when it comes to communicating ideas. The downside to the invention of writing–the weakening of memory–was not lost on the Greek philosopher Socrates, who never wrote anything down. As recounted in Plato’s dialogue Phaedrus, Socrates faulted writing for weakening the necessity and power of memory, and for fostering the pretense of understanding rather than true understanding.
“This invention,” he allegedly said, “will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.”
Socrates would appear to have been on to something. Modern studies of the brain show that repetition and memorization strengthen neural pathways, improving memory retention and recall over time. Rote memorization has also been shown to improve neural plasticity as well as focus. From my own experience performing in both French- and Japanese-language drama troupes and spending many hours memorizing lines, I can attest to the power of memorization for language learning–absorbing bits of language that way did more for my fluency than any other single activity.
It’s worth remembering, of course, that the only reason we know what Socrates had to say on this subject is that Plato wrote it down. Still, while it’s a stretch to think that the invention of writing made us dumber (as opposed to freeing our minds for other tasks), the extraordinary feats of memorization that were once commonplace in preliterate societies do attest to a level of cognitive functioning that is rare nowadays. We might all benefit from spending more time memorizing poetry, sacred texts, plays, and so on.
The Printing Press
The invention of the printing press by Johannes Gutenberg in the mid-15th century is the development most often likened to the AI revolution. As with AI, the printing press provoked an economic and professional backlash from scribes, monks, and other copyists who (legitimately) feared for their livelihoods. In certain instances, members of scribes’ guilds actually destroyed printing presses and chased book merchants out of town. Meanwhile, religious leaders worried aloud that mass-produced books would spread unapproved or heretical ideas, undermining the Church’s control over scripture, and responded with censorship and even book burning.
Some thinkers of the time feared that the printing press would erode intellectual rigour. Others foresaw a world of information overwhelm—a situation all too real in today’s digital age. What nobody in the 15th century foresaw, though, was the (very) gradual eclipse of the art of handwriting, which numerous studies have shown is good for our brains: writing by hand significantly improves memory retention compared to typing, while the motor engagement involved strengthens neural connections, making the brain more efficient at processing motor-related tasks over time.
Still, from the standpoint of mass literacy and the democratization of information, I don’t think anyone today would argue that the printing press was a step in the wrong direction. Moreover, journalling by hand is still popular and clearly beneficial to our mental wellbeing. And I don’t see that going away anytime soon.
The Calculator
The world’s first fully electronic calculator was unveiled in 1961. Pocket-sized devices began hitting the market in the 1970s, and by the end of that decade, calculators had dropped in price to such an extent that they became a standard school supply. Nowadays the pocket calculator has largely been eclipsed by smartphone calculator apps, although its lack of distractions still makes the traditional device an appealing tool for many.
Calculator use by children in schools remains a matter of debate. Many have argued—and continue to argue—that calculators can cause core mathematical skills to atrophy, or that their use can prevent understanding of advanced algebraic concepts. As recently as 2011, the UK’s Minister of State for Schools expressed concern that children’s mental and written arithmetic suffers when they come to rely on calculators at too young an age. Others have countered that while, yes, certain skills do atrophy, education has adapted, with the emphasis shifting toward conceptual understanding and higher-order problem solving.
I suspect that today’s math educators have, for the most part, struck the right balance with the electronic calculator, neither introducing it so early as to inhibit the learning of basic arithmetic nor depriving students of an invaluable tool for higher-level mathematics. Interestingly, I still find myself doing addition and subtraction the old-fashioned way with pen and paper, even though a calculator would be at least somewhat faster. It’s probably early-childhood programming. That, and a desire to retain my basic arithmetic skills.
The Internet and Search Engines
The search engine actually predates the invention of the World Wide Web in 1989. The concept of an electronic information retrieval system was first described by Vannevar Bush in 1945, while the first actual system of this kind, WHOIS, was created in the 1970s and first put to use in the early 1980s on ARPANET, the predecessor of the modern internet. The first “all text” crawler-based search engine was WebCrawler, which launched in 1994, while Yahoo! Search, one of the first popular search engines, debuted in 1995. Google, which arrived in 1998, revolutionized the field with its relevance-based ranking, while the selling of search terms (pioneered that same year by Goto.com) soon made search engines big business.
Concerns about the potential for electronic media to supplant literary reading have been around since the dawn of the internet. The rise of Google in particular prompted criticism alleging that people would cease to remember facts, that attention spans would shrink, and that people’s capacity for so-called “deep reading” would decline. In a landmark 2008 article in The Atlantic entitled “Is Google Making Us Stupid?”, technology author Nicholas Carr asserted that Google and other knowledge-finding technologies may speed up existing human computational processes, but at the cost of foreclosing the human potential to easily create new knowledge—an allegation that presaged modern-day criticism of LLMs.
The jury largely remains out on whether the internet and search engines have a deleterious effect on our cognitive abilities. A 2021 paper entitled “Information without knowledge: the effects of Internet search on learning” backs up proponents of the so-called “Google Effect”: internet searches reduce the likelihood of information being stored in memory because of the ease of retrieval that the internet provides. Others, including Albert Einstein as far back as 1921, have argued that too much is made of the importance of memorizing facts. “The value of an education in a liberal arts college,” he asserted, “is not the learning of many facts but the training of the mind to think, something that cannot be learned from textbooks.” Or search engines.
I’m just old enough to have had to do research the old-fashioned way as a graduate student—in cramped archives, cranking through microfiche—and I’m thankful that I learned to conduct research in this way. I really feel like I had to work for that degree. Still, I don’t feel that my intellect has been at all compromised by today’s internet; it’s simply made retrieving information a whole lot easier. At the same time, though, when I really want to immerse myself in a subject, I still reach for a physical book. It’s well attested that we process information differently that way, and I for one definitely notice the difference. I’m glad to have both options.
Why AI Feels Different
When it comes to the big technological changes humanity has faced, history teaches us that people fret about them at first but then just get on with life, the new development becoming something mundane that we take for granted. In all the above-mentioned examples, the technological paradigm shift presented us with a cognitive trade-off: in exchange for one set of skills potentially atrophying, more of the human mind was freed up for different and perhaps more edifying cognitive tasks. The improvements in quality of life that we’ve witnessed over the course of these developments are also hard to argue against.
AI feels different in a number of respects. Most importantly, there appears to be no cognitive trade-off in the equation. If humans can offload not only the research but also the creative thinking and the writing itself, what exactly is left for the human being to do? Sure, AI enables us to do more of whatever it is we’re tasking it with, but at what cost? And does the world really benefit from us all simply being able to crank out more content? We’re already being inundated with more content than any of us could possibly absorb—exactly the overwhelm that some 15th-century critics of the printing press feared.
There’s really no human cognitive task that a sufficiently advanced AI system can’t at least simulate, and this is the main difference between this technological paradigm shift and those of the past. The speed at which this technology is advancing is also unprecedented. Writing took millennia to become universal. The printing press took several centuries. An interconnected world allowed pocket calculators and search engines to take off quickly, but in both cases their evolution has been relatively slow. It remains to be seen how fast AI will evolve over the next decade, but the pace of its progress thus far has been unlike anything humanity has ever seen before.
I do worry that unless we as a society set limits on what we task AI with doing, we are going to see a wholesale erosion of our cognitive abilities and our creativity that is quite unlike anything we’ve seen in the past. I don’t think any of us are immune to the pull of intellectual laziness; I know I’m not, as I saw in my own growing dependence on AI as a writing tool before I mostly stopped using it as a first-draft generator. I think there are real reasons to fear what unchecked AI might do to our brains and, as a consequence, to our sense of wellbeing. An under-exercised brain is a surefire path to lassitude and depression.
I suppose there are things we could do with our minds even if we did end up outsourcing all our cognitive and creative tasks to AI. We could, for example, all become Buddhist monks and spend twelve hours a day in a state of deep meditation. We could all become masters at introspection—with or without the help of psychedelics—or spend our days communing with the natural world à la St. Francis of Assisi. We could also all become tantric sex experts and devote ourselves to marathon lovemaking sessions. Conscious experience—the fact that there’s something it’s like to be you—is perhaps the one thing that could never be outsourced to AI.
Personally, though, I’m not ready to surrender my intellectual faculties to a machine, even if it could theoretically accomplish any intellectual task better than I could (which it certainly can’t at this point). Moreover, I’m beginning to wonder if all the hype around AI gallivanting towards artificial general intelligence (AGI) may be just that: hype. LLMs still hallucinate all the time. The content they produce is paper-thin. ChatGPT can’t even correct a spelling mistake in an image it generates, at least not without creating yet another egregious error. Three years ago, I was duly impressed by what ChatGPT could do. These days…not so much.
Yet for all their shortcomings, the AI chatbots of today are still enough of a simulacrum of the human intellect to crack open the door to an encroachment on human cognition that is detrimental to us all. I’m not suggesting we put a stop to AI advancement—I want that still-elusive cure for cancer as much as anyone else, and if a machine can get it for us, that’s great. But AI is, I believe, different from previous human innovations in that it threatens to subsume the entirety of human cognitive activity and creativity and hollow out our intellects.
Perhaps I’m wrong. Maybe AI is no different from any of the technological innovations of the past, and we’ll be all the better off for it while retaining our intellectual fitness. In the meantime, I’m going to continue to write my first drafts manually until someone pries the keyboard from my cold, dead hands. It’s what I need to do to remain intellectually fit.