
Mother Pelican
A Journal of Solidarity and Sustainability

Vol. 21, No. 9, September 2025
Luis T. Gutiérrez, Editor


Some Thoughts on Humanity's Really Big Disease

Bob Este

This is a revised version of an article originally published by
the Canadian Association for the Club of Rome, 17 July 2025
© Bob Este, Ph.D.



Kismet, a robot head made in the 1990s: a machine that can recognize and simulate emotions. Photo by Rama via Wikimedia Commons, CC BY-SA 3.0.


Humanity has created, and contracted, a very serious and highly contagious disease. There is no vaccine and there is no cure. We are all becoming very ill, and there is no way out.

The disease, writ large, is spreading rapidly. It can best be characterised as the accelerating advance and meta-reinforcement of non-critical thought. No-one is immune and it cannot be avoided. Once you are infected, you’ve got it — it won’t “pass”; you won’t “get better”; it’s nothing like “just a touch of the flu”.

Losing our capacities for critical thought changes us from Homo sapiens sapiens to Homo has been. This is our road ahead. This is what the disease is doing to everyone.

Oh — you think I’m being alarmist? That I’m screaming that the sky is falling — you might say: “Oh please, give us a break!” Hmmm … sounds like you think there’s no way such a thing could be happening.

Sorry mate — from what I can see, we’re well on our way, and nobody is exempt.

Allow me to emphasise that the effects of this disease are profound. As we lose our critical capacities, we lose them “for good” — a phrase that is the complete opposite of what actually happens, since the loss is very much “for bad”. So, yes, we have a memetic denotation problem with the label; this may itself be a symptom of the disease.

Now, consider the following: if you were blind to this problem, or if you were a truly nefarious type — and if, given the time we have remaining, you could make this disease work in your interests — you could control, through policy and its articulation (and, if you’re really nasty, perhaps through variations on the theme of conflict of many varieties, including war), how people acquire criticality, the extent to which that acquisition occurs (if at all), and, in the end, how they apply what remains of that criticality, in whatever form it finally takes, to whatever circumstances present themselves. In other words, aware of it or not, you can very much push your lead foot down on the accelerator — the pedal to the metal.

But wait! This is a serious disease! Can’t we slow it down? Stop it? Is there a brake pedal?

Sorry mate ...

So, let’s sharpen our focus on what has been reported in China Daily about the control of the study and practicalities of translation and multilingualism. It seems that the Chinese government is in the process of shutting down a wide range of human-based translation and multilingualism education and training programs. Because it’s illustrative, I’ll return to this specific example a few times in this essay.

This is an important development because it reveals one of the “thin edges” of longer-term social engineering that pries apart and disempowers the structures and processes of knowledge acquisition based on human skills of critical investigation and communication. It does not take much to come to the conclusion that the longer-term consequences of this effort show up in the skewing, reduction, and then the eventual elimination of human critical capacity. This is a big deal.

One of the tools that the government is reported to be using to hammer in this “thin edge” is artificial intelligence (AI). The argument for doing so appears to include the notion that AI is more efficient and less expensive than the work of humans, and that it is at least comparably accurate. Whether or not such arguments are factual, this intentional shift to AI is a rather brilliant move — especially when seen from the perspective of social engineering. I explicate it here as an example of the disease just mentioned.

I suggest the move works in the following way: if you can ensure that everyone accepts, normalises, and becomes dependent on “the work of AI” to delimit the meanings of what others say (across languages, in this instance of translation), then “normal” human communication skills and channels that allow the clarification, negotiation, and comprehension of meaning are de-emphasised and, in the end, no longer available to humans in any form we would recognise today — regardless of how fragile, inaccurate, or subject to manipulation those original channels may have been before the advent of AI (a canard that may be held up as partial justification for reinforcing the new dependency).

By ensuring and reinforcing dependency on AI, all communication (with translation among languages here being the specific example) is mediated by algorithms we and our AI assistants have created and implemented, including what that so-called translation machinery continues to learn (and let us remind ourselves that the machinery is not human, and is not controlled by us — although we might seek comfort in the misapprehension that we are still and will always be in control of it).

This is a very significant point because, as human translation skills are eliminated and AI becomes the driving force behind what we become accustomed to thinking of as linguistic translation, adaptive human engagement and critical thought atrophy. They are no longer “in the loop” in any significant formative or responsibly effective way. We claim to be making good use of a new tool assumed to work as we desire, in the same manner as all tools that have preceded it; but what is the answer to the question of how the tool is shaping, affecting, and using us? Who are we then?

The answer appears as follows: as human criticality atrophies, we become an increasingly passive, small (and perhaps non-essential) component of that machinery. A very rapid consequence follows: the rich, shared human heuristics of critical engagement, reflection, and reflexivity are vastly reduced and pushed off to the side — in the end, they are rendered useless. As a result, we — what I will here denote as the “originator species” — still “do” things, but we are no longer in the thinking game. The AI tools we have invented have shown us the way out the back door. We may think we are still in the thinking game and be convinced that this must, of course, always be so; but I argue that, in the end, essential criticality is no more. Who are we then?

This is how the disease “works”. Are we aware of it? If the “AI machine” does the lion’s share of what we thought was our work, but at the same time is “making us stupid” (as has been pointed out for years in the case of PowerPoint, for example; see the work of Edward Tufte, B.J. Fogg, et al.), is this what we want? If this is the aforementioned disease, does anyone think this is the deep, long-term result we truly wish to have from the implementation and diffusion of AI penetrating all realms of humanity?

We can of course talk about the “labour-saving” side of AI. True, AI may be able to find and filter information faster than any human can. AI may also summarise findings as we wish to have them summarised, and in manners we think are correct. AI may be able to accomplish these things and create desired products that we end up thinking are of better quality than we, ourselves, could ever be capable of producing. What’s not to like?

An AI may be able to write a succinct précis of an article it has itself written at the original request of a human researcher who asks it to use the writing style of Halldór Laxness, for example. The result could truly be of interest. AI may “discover” commonalities, logical relationships, themes, or manners of expression that appear creative, original, or even insightful. The caveat might then be that the human could direct the AI to preface any such material with: “No human intelligence has been used to create anything that follows this caveat.”

AI may be able to produce such things if it is asked to do so; and, nowadays, it may even generate such things on its own. But what is it that has been accomplished? Human labour may indeed have been “saved” and the product may indeed be useful in many of the ways that some human may have wished, and then some. The utility may be made available through channels that no human had imagined. So, is the “work of AI” about enhancement of human thinking, including what we know of as our own criticality, or is it about something else entirely? Could the “work of AI” have instead as an unanticipated outcome the distortion, reduction, atrophying, and finally the erasing of human criticality entirely?

So let us ask: are we not capable of seeing what we’ve done, and is this part of the growing problem? Is this a “feature” of the disease we’ve created?

Although rapidly becoming ubiquitous, AI is not completely deployed — yet — so, at this time, we have not yet been fully removed from the thinking game (although we have some very interesting evidence that this process is underway — Ugo Bardi's note about the recent “decision”, so-called, to bomb Iran’s nuclear capabilities is a case in point; there are many others). We still seem to be somewhat capable of controlling (or at least partially influencing) the expressions and overall course of the disease — but, to what extent seems very unclear, and whether that influence will make any difference in the long run remains an open question. I don’t mind opining that I am not at all hopeful.

Regardless, for the time being, human players appear to retain some potential to temporarily engage in manipulation of how broad human capacities for critical reflective and reflexive intelligence will, or will not, be built, encouraged, leveraged, deployed, and employed. But it seems we cannot deny that the disease is spreading, having broad impacts, and becoming more powerful all the while. What happens as our critical human capacities are influenced, impacted, shaped, distorted, and one day reduced to insignificance? What then? Who are we then?

Let’s examine the first option of what we could do right now: full speed ahead with a full embrace, with no need to think or do anything about “the disease”; no need to intervene in the above-mentioned processes of AI diffusion — allow “market forces” to do their work. So, allow me to ask: do we think we reliably know, or do we really know anything at all about the evolving consequences of this disease that essentially puts a halt to, erases, and eliminates our own authentic self-referential criticality?

In the face of this disease, isn’t the fact of thinking that we are still sufficiently critical — sure that we know what we’re doing, confidently in control as we think we ought to be — exactly what occurs when humans stop thinking, especially when that stopping is augmented, and then wilfully and blindly implemented in final fashion, by the very disease we have created and of which we are an integral part? Who are we then?

We may not like to think this is what we are doing, or that this sort of denial is occurring so pervasively — and that we, collectively, are becoming less and less a part of what we have created, as we are moved off to the margins and then into obscurity.

I suggest that if we truly want to know if this is happening, all we have to do is look around. Looking in the mirror may also be helpful. The consequences of all our unfolding polycrises that we already understand (to the extent that we are able), as well as those we do not (which are legion), are spreading out before us, around us, alongside us, within us, and through us. They ARE us. This seems plain enough: the polycrises are expressions of this same disease.

So: all of this is happening and continues to happen — and not apart from us, or somehow separate from us — at the very moment of my writing this essay, at the moment of your reading it. Do we understand anything at all about how and where this proliferation is unfolding, so much of which we see in such blindered fashion, and where these elements and processes that we can in fact perceive will almost certainly go? Is this what we want?

Let us return to where we began in this essay: the example of systematic erasure and elimination of multilingualism and translation. Setting things in motion to restrict, limit, or even eliminate on-board “human” language skills is only a small part of what I am describing — only one small step along the way to eventually (and quickly, I would add) being non-human. We are turning into Homo coulda-woulda-shoulda so fast that we can’t even feel it. Limiting and then erasing “human” language skills (by way of translation, in any venue, as just one example used in this essay) has the obvious outcome of setting in motion the machinery that reduces and eventually erases human critical capacities.

Without our normal linguistic and cognitive experiences that allow us to deal with, embrace, learn from, and move forward with uncertainty; without depth, breadth, and flexibility of language(s) coupled with the pleasure of discovery; without the power of insight gained by being right and by being wrong (or both, or not yet knowing either); without knowing the extent, nature, depth, and breadth of our learning and meta-learning — any remaining criticality generated (as it is in “normal” circumstances) through abductive reasoning becomes increasingly limited, if not outright impossible. Who are we if this takes place? Again — is this what we want?

We here arrive at what I take to be the most important point: restricting, limiting and erasing human criticality appears to me as a very serious, clear and present danger. Reducing our overall on-board human criticality through any means whatsoever is the least desirable outcome of anything we humans might undertake.

This reduction is the pernicious disease we have created and are spreading as it grows. The elimination of multilingualism and all aspects of translational knowledge and skills, coupled with the above-mentioned diffusion of ersatz-translational “work” of AI — although undeniably containing nuggets of limited positive enhancement (such as aiding in human search for new possibilities, etc.) — does nothing, from a larger, longer-term, systematic intelligence perspective, to avoid or mitigate this danger.

Going in the direction of disease proliferation, of capacity reduction and eventual erasure of criticality, is a risk that humanity simply cannot afford to take.

Why, then, would anyone wish to take this course, to spread the disease, to ensure there is no resistance possible, to reduce and eventually erase human criticality?

I here claim that what I have outlined is undeniably the case — especially against the backdrop of so many other serious and increasingly urgent survival issues arising as quickly as they are. Again, I take the liberty of reminding ourselves of our “hot mess” consisting of so many interrelated emerging polycrises.

It is my view that the processes and consequences of this unfolding are powerfully self-reinforcing — they accelerate and obscure themselves, rendering themselves and their combinations less understandable and therefore more opaque (out of sight is out of mind, is it not?). This renders the unfolding disease increasingly unknowable (and far less meta-knowable) and thus, in the end, permanently forgettable. I suspect we will therefore forget our forgetting ...

Not to put too fine a point on it: this is the beginning of a VERY bad day. Indeed, I would suggest it is fatal.

My biggest fear with regard to the time we have remaining is that this move is guaranteed to only be a one-way street. Once underway, there is no going back.

The focused point being made here: the fact that AI can carry out the “work” of technical translation among languages (as just one example of such “AI work”) has nothing to do with sharpening or enhancing what inheres in human criticality. I argue that we humans need to do such work ourselves — with the caveat that we should of course do so with useful augmentation that does not follow an infectious disease vector — in order to maintain and grow our human criticality. We just have to know the difference between the two.

As an aside: I suspect that AI carrying out the “work” of technical translation among/across languages could likely accelerate the rate of translation by zeroing in on the most common patterns of word usage revealed through deep analytics of translation training sets and, through such analytics, perhaps uncover novel ranges of options for translation that would not commonly be accessible to human translators. This could conceivably be useful, and would be akin to what “Watson” (out of IBM) was able to do — advance reasonably accurate and plausibly helpful medical diagnostic assistance — essentially, to explore and provide more options for a human medical practitioner to consider, options that the practitioner might not have had the time, personal knowledge, or experience to hold up to the light. But it should be very clear that such AI “work” is disconnected from, and not an integrated heuristic part of, the autonomous, normal growth and application of human criticality. Watson could never be given the responsibility for a diagnosis.
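
To make the point about options rather than judgment concrete, here is a minimal sketch (mine, purely illustrative, not anything the author describes), assuming the open-source Hugging Face transformers library and its publicly available Helsinki-NLP/opus-mt-en-fr English-to-French model. Beam search yields an “n-best” list of candidate translations drawn from patterns in the training data; weighing and choosing among them remains a human responsibility.

    # Minimal sketch (illustrative assumptions: the Hugging Face "transformers"
    # library and the public Helsinki-NLP/opus-mt-en-fr model). The model surfaces
    # several candidate translations; it does not, and cannot, take responsibility
    # for choosing among them.
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-fr"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    # An idiomatic sentence of the kind that resists mechanical rendering.
    sentence = "The hurrier I go, the behinder I get."
    inputs = tokenizer(sentence, return_tensors="pt")

    # Beam search returns an n-best list of candidates; num_return_sequences
    # must not exceed num_beams.
    candidates = model.generate(**inputs, num_beams=8, num_return_sequences=5)

    for rank, seq in enumerate(candidates, start=1):
        print(rank, tokenizer.decode(seq, skip_special_tokens=True))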

Back to the language example. I would argue that the deep distortions, and even the discontinuations, of human learning in the exploration and “production” of multilingual translation, when it is “handed over” to AI, are a good illustration of what may very well be an unanticipated outcome of the kind of “work”, so-called, that AI can do (and do well, in relative terms, if restricted to providing what humans take to be relatively trustworthy options): an outcome that does not improve but REDUCES human criticality — and all that comes with such undesirable reduction.

The argument that the work of AI in such an instance increases the number of options, and/or perhaps somehow “enriches” the terrain of options that a human translator might entertain when conducting linguistic translation, might therefore have some truth to it; however, an expression of the aforementioned disease is that some of us may think (and in so thinking, demonstrate that we think poorly) that all of this is good and desirable, that this is all we need to know, and that we should just get on with it. IMHO, this is not a good way of understanding this disease, or our predicament.

If anyone thinks that the aforementioned line of “acceptance” is adequate or even desirable in face of the disease now spreading across the world, that way of thinking is IMHO a very serious symptom of the disease itself. IMHO we need to be honest about the possibility that all is not as rosy, as inviting, or as moderately hopeful as we are so easily led to believe, or want to believe. That is: powerful (and unknown) unintended negative outcomes may be (and indeed appear to be) at play and, I would suggest, they are emerging before us; I am very sure this possibility (for which the probability is NOT zero) requires some very careful reflective and reflexive thinking — well before anyone “gets on the bandwagon” for making policy decisions about whether translation programs (as just one example) should be eliminated, and if so, which ones.

As I have intimated: this is the thin edge of the wedge.

As one of the humans in the loop who is still standing, please note: if I no longer have to think about how other humans think about meaning (of the words we use, in every context, as we use them) because I have come to think that the AI will do this for me (which is quite obviously not really thinking at all), then I will, by default, NOT be able to communicate as a full and complete human. I will only be able to communicate, non-critically, in accord with the ways that the AI determines.

This might be fine if the only things I am interested in are those things that the AI (on account of what it is programmed to do) has predetermined to be “workable” in terms of translation. However, let us be clear: what the AI — on account of what it is programmed to do, and how it continues to program itself accordingly — determines is “workable” and “doable” does not, and logically CANNOT encompass any of the richness or subtleties OR THE EXCEPTIONALLY VALUABLE UNKNOWNS of what human communication and therefore human translation logically MUST deal with — which takes what I will here denote as OPEN HEURISTIC CRITICALITY.

Human communication is not primarily algorithmic — although, as we know, it has algorithmic components. Our communication is primarily a forever-dynamical, adaptive, heuristic blend of extremely complex communication elements. Saying that this dynamism is not primarily algorithmic logically means that no “program” or “formula” of human communication can ever be written or discovered to describe human communication: to use the vernacular, authentically human communication, although built on some knowable structures and patterns, does not compute. It is “an emergent”. Communication qua communication is neither right nor wrong — communication “is”.

My point here is that programmed algorithmic communication (such as what would be generated by AI translational equivalencies which are non-human) does not have meaning in the sense of human heuristically-derived meaning; only the processes of emergence which are heuristically human (and which may indeed include algorithmic elements) have meaning — to, and for, humans. With the erosion and eventual elimination of human criticality, if what we communicate ends up having no meaning, what then? Who are we then?

The complex adaptive elements of human communication that carry authentic meaning certainly DO include some algorithmic components (e.g., the rules of grammar, syntax, analogical reasoning, commonly held definitions and denotations of terms, etc.; it could be argued these are indeed essential for overall structures and processes of communication) — BUT the algorithmic components do NOT carry or define what we humans mean when we employ what can be recognised as algorithmic components in order to communicate — or, especially, the vast numbers of things we can develop (including meta-meaning) from human communications.

The very high value of human adaptive meta-critical thinking skills and abilities is built, enhanced and maintained through reflective and reflexive critical thought itself. Human language translation is one of the best ways that we can acquire, learn about, apply, modify, succeed and fail and experiment with, explore, refine, benefit from, and then generalise our skills and abilities of overarching criticality. For example, accurately and meaningfully understanding SOME of what others actually MEAN when they communicate puts us in the position of having to be open to a vast range of possibilities as well as being aware of the very high value of our own incompleteness and uncertainty.

You may recall the old aphorism, “The hurrier I go, the behinder I get.” This aphorism continues to provide a very nice window through which we can still peer into the disease we have created, contracted, and are now spreading so well, so effectively, and with such finality. We are all suffering from this, from our own disease.

Fortunately, in the time remaining, we can still see what we have done (if we choose to look). We have plenty of evidence that ever since we were bitten by our own self-created and self-imposed bug, we’ve done little to nothing but blindly rush as fast as we possibly can down the overshoot road of greed and increasingly non-critical self-destruction. My general sense is the following: we may not wish to continue with this course of action.

An AI might, one day, be developed to provide what we think of as reliable evidence that it does comprehend the full palette of human language (or anything else, for that matter) just as humans do — and can, in turn, create and share with us what we humans take to be “meaning” that can be reliably communicated through language. If such evidence does appear one day, is that all we need to know about AI “capacities” to understand “meaning” (through language, for example)? Someone might one day then suggest that AI “understands” what it means for some “thing” to have what we take to be “meaning”.

If and when that ever happens, would that then mean that AI would have emergent properties of what we have come to think of as "general intelligence”? Would that “general intelligence” agree that we humans possess anything remotely akin to what it seems to possess, and to what we humans think we possess? Who, then, is this “general intelligence” (if that is the proper way to denote and think about it)? Who are we then?

As we approach the end of this essay, I’m sure you get my drift regarding the disease that we have. We’re doing a spectacularly good job of reducing and even eliminating our own on-board human capacities for human adaptive meta-critical thinking skills, both with and without AI. That’s what our disease does. You can even think about it as follows: that’s what the disease is for. Where does that leave us as a species? Who are we then?

Earlier in this essay I mentioned "the first option of what we could do”. Although I’ve just said that we may not wish to continue on that course, I don’t think there is a second option. This puts our species in a difficult position.

Thus, personally, I am not hopeful about where this story seems to end. We're well on the way to the conclusion of our own self-fulfilling prophecy that the journey of Homo sapiens sapiens has very nearly run its course. Perhaps it’s best that we leave what we think is our legacy to the AI that we are still shaping, and — to the extent that we can — helping to shape itself, as we follow the inevitable one-way path of leaving the game.

Will this signify and end up meaning anything to AI? Would that mean anything to us? Will the AI know, or care?

As I’ve said at the conclusion of other essays parallel to this one, it probably doesn’t matter very much, one way or the other.

References and Suggested Readings

Al-Sibai, Noor (2025) Top AI Researchers Concerned They’re Losing the Ability to Understand What They’ve Created. Futurism (retrieved 16 July 2025).

Bardi, Ugo (2025) Is AI Rewriting the Rules of War? The Seneca Effect (retrieved 16 July 2025).

Foust, Jeremy L. (2025) Why We Choose to Avoid Information That Is Right in Front of Us. Psyche Magazine (retrieved 15 July 2025).

O’Rourke, Meghan (2025) The Seductions of A.I. for the Writer’s Mind. Guest Essay (Opinion), The New York Times (retrieved 18 July 2025).

Zou, Shuo (2025) English Majors Face Uncertain Future as AI Replaces Basic Skills. China Daily (retrieved 15 July 2025).

Smith, Andrew (2015) How PowerPoint Is Killing Critical Thought. The Guardian (Opinion, Technology) (retrieved 16 July 2025).

Tufte, Edward R. (2003) The Cognitive Style of PowerPoint (retrieved 16 July 2025).

Vermeer, Michael J. D. (2025) Could AI Really Kill Off Humans? RAND Commentary (retrieved 18 July 2025).

Social Construction of Technology. Wikipedia (retrieved 15 July 2025).

Technology and Society. Wikipedia (retrieved 15 July 2025).

Persuasive Technology. Wikipedia (retrieved 15 July 2025).

Captology. Wikipedia (retrieved 15 July 2025).

Halldór Laxness. Wikipedia (retrieved 15 July 2025).

Is Google Making Us Stupid? Wikipedia (retrieved 15 July 2025).

Law of the Instrument. Wikipedia (retrieved 15 July 2025).

Abductive Reasoning. Wikipedia (retrieved 16 July 2025).

IBM Watson. Wikipedia (retrieved 16 July 2025).

Note

To the best of his knowledge and in accord with his ongoing vigilance, and for what it’s worth according to any known measure, the author claims that only his own natural intelligence has been used to create this essay. The author therefore warrants, to the extent that he can claim to know, that no AI has been used in any known or unknown way, whether intentionally, by accident, or through any other means of incursion into the computing equipment used to create this essay or into the mind of the author, to create and/or write the entirety or any part of this essay. Any technical or factual errors contained in this essay are entirely the responsibility of the author.


ABOUT THE AUTHOR

Bob Este earned his Ph.D. at the University of Calgary, Alberta, Canada, exploring architectures of innovation and the philosophy of science, focusing on the metaphysics of denial. He is currently a member of the Fusion Energy Council of Canada, where he uses his background in innovation studies and large-project management to provide development advice in the Canadian fusion realm. He is also an Associate Member of the Canadian Society of Senior Engineers. Bob is an active contributing member of the Canadian Association for the Club of Rome, serves as a peer reviewer for Interchange, a major Canadian education journal, and is a Fellow of The Center for Innovation Studies in Calgary.

