No Machine Can Threaten the Primordial Fabric of Our Existence

Anders Bolling
9 min read · May 28, 2023

Is A.I. merely the next layer in an already “artificial” reality?

Illustration: Canva

Everybody and their aunt takes on the A.I. issue these days. I don’t know if I am everybody or their aunt, but here is my contribution, anyway.

I will examine the phenomenon from a more philosophical-spiritual viewpoint than most do. I will not dwell in detail on either the wonderful or the scary things A.I. can do, and I will not land on any clear-cut conclusion about the pros and cons.

I have touched on the topic before, briefly: I made a couple of short videos in which I claimed that we probably don’t need to worry about machines taking over any time soon, considering how poorly computers still perform at predicting the weather, let alone the climate or seismic events.

The same goes for the human body, by the way. A medical doctor, no matter how skilled, doesn’t have a clue as to where my next physical ailment will appear. Will robots? I have my doubts.

Both Gaia and the human body are vastly complex life forms. We have barely begun to understand how they work, let alone detect the underlying powers. Main reason: we are looking in the wrong place. We are only studying the physical expressions of life.

I have followed parts of the ongoing A.I. conversation. Physicist and cosmologist Max Tegmark is worried, as are historian Yuval Noah Harari and cognitive psychologist and computer scientist Geoffrey Hinton. Kevin Kelly, futurist, writer and co-founder of Wired magazine, is not worried (if anything, he is enthusiastic). Cognitive psychologist Steven Pinker keeps his cool, as always, and transformational teacher and MD Deepak Chopra is also not particularly worried (for reasons that resonate with me).

Few debaters (Chopra is one of the exceptions) explore the deep philosophical basis for taking either stance.

A.I. stands for Artificial Intelligence, not Artificial Consciousness. I am convinced it will never be possible to develop the latter, because consciousness is primary, non-physical and essentially indivisible (which probably needs some explanation, but that is for another essay).

The primordial fabric of our existence cannot be threatened by any machines. But that doesn’t mean that the emergence of A.I. will not challenge various physical expressions of humanity.

These machines will always be non-human in some obvious way

If consciousness is primary and eternal, and if the essential core of a human being is an aspect of that consciousness — thus, by definition, something non-physical; a soul, let’s say — then these machines will always be non-human in some obvious way, such as being unable to convey detectable heart-based emotions or to express unconditional love.

An alert human being will thus always be able to tell where humanness resides and where it does not, and the line between human and machine will never become completely blurred, no matter how many psychological and social problems the development of A.I. causes.

Let me quote Deepak Chopra here:

A.I. reflects the level of awareness of its users. A.I. has no level of awareness since it isn’t conscious. It can absorb, shuffle, combine, and recombine data (information) in fantastic ways, but human awareness is infinitely more than data and information. Indeed, “information” is a concept that had no reality until the human mind created it.

In a talk about the dangers of A.I., Yuval Noah Harari describes ancient worldviews like Buddhism and Plato’s cave allegory as cultural stories, not as actual insights into the human predicament. He also states that films like The Terminator and The Matrix are based on our fear of A.I., not that they are allegories for said predicament.

Harari also, along with Hinton and many others, makes a strong point of the possibility that robots will soon be so good at mimicking human behavior that we will mistake them for humans and be fooled into doing as they tell us.

But if you think about it, for how long have we run the risk of letting ourselves be deceived by other humans who are selfish, perhaps even psychopathic, and who want to take advantage of us, to suck out our energy? Well, always.

The ability to listen inward, to listen to one’s heart, when making pivotal life decisions, such as who to trust and who not to trust, is a proficiency we constantly need to train. Consider the many dictators in history who have been able to manipulate millions of people to think and act in ways they probably would not have if they had listened to their inner voice. Would it have been worse if they had followed a machine instead of a mouthy caudillo?

As it happens, we have already experienced being outsmarted by A.I.

It is of course possible that machines will become more efficient than heartless humans at both planning and executing destruction. But that will not happen unless they are instructed to do so. Remember, for at least fifty years we have had the ability to wipe out our own civilization simply by instructing nuclear weapons to go off (pressing the button), yet we are still here.

But if they eventually instruct themselves? Well, if there is a possibility that intelligent machines will outsmart humans and subsequently enslave us, there ought to be an equally large possibility that those machines will outsmart humans and subsequently create world peace, say, or make friendly contact with benevolent alien civilizations.

As it happens, we have already experienced being outsmarted by A.I. without detrimental consequences. Computers became better than humans at playing chess more than twenty years ago. What happened after that? Humans started playing chess with each other like never before. It is quite possible that an expansion of A.I.-generated content will boost the demand for genuine human-generated content. This is probably what will happen with art and literature.

Photo: Canva

Now, let us take a look at the issue from a non-spiritual perspective:

If you are a materialist, a.k.a. a physicalist, you see reality as fundamentally physical, which by extension means that humans are flesh robots, essentially. You then arguably believe that consciousness — to the extent you think consciousness is something meaningful to contemplate — is some kind of emergent property in advanced life forms produced by the brain, i.e. produced by matter. You then presumably also believe that robots should be able to emulate humans enough that they, too, eventually can develop consciousness.

Once you have landed there, what would be the problem with A.I., really?

If you have a materialist worldview and still worry about A.I., you must explain what it is that is so specifically human that we cannot let machines copy it. And where does that specific humanness come from? If a human is basically a biological robot, what is wrong with letting artificial intelligence develop biological robots like us, but with fewer (or no) flaws?

I don’t get the logic there.

Only the transhumanist movement is consistent about this. They welcome the robotization of humanity. Eerie, if you ask me, but consistent.

Illustration: Canva

From a spiritual point of view, there is of course a fundamental difference between what emerges from matter (like a machine) and what emerges from pure consciousness (like a human being).

In my view, human consciousness is expanding, and no machines are needed in the process. We (some) have just barely begun to understand how powerful we really are. I am convinced that our extremely under-harnessed DNA carries abilities that this species, or a species we at least in part stem from, once had, but then forgot.

I will give you a couple of examples:

• If computers begin to master the spoken and the written word better than humans, then let us bypass physical language and re-learn our latent ability to communicate telepathically.

This is not even particularly woo-woo. Plenty of scientific articles and peer-reviewed papers show evidence of the transmission of thoughts, intentions or images between people without any technical or physical means, sometimes over long distances.

• If robot doctors become better than human doctors at detecting pathological changes in the human body and pinpointing pertinent therapies, then let us embrace and develop the self-healing ability we already have within us but which has been suppressed, or at least neglected.

Western medicine reluctantly acknowledges our natural immune system, but when it comes to severe illnesses, no standard doctor will pay any attention to the skills of the patient’s own body. But again, mainstream science actually encompasses what can be seen as proof of advanced self-healing, particularly in the form of spontaneous regression of cancer.

Artificial intelligence should be no match for a human who fully utilizes her capabilities — non-physical consciousness in beautiful concert with the physical body.

Thus, we are more powerful and more free than we realize — or than we remember, rather. I don’t think it is by chance that movies like The Matrix and The Truman Show become blockbusters. These stories remind us of an uncanny inkling many of us have carried since the day in early childhood when the physical world got a grip on our attention: an inkling that we are walking around in an illusion. Or a simulation. Or a dream. Pick what resonates most with you.

I believe this underlying suspicion has accompanied Homo sapiens since the dawn of history, and I suspect that the fear that A.I. might take control over humanity is the latest version of this continuous unconscious monition.

As I mentioned, Yuval Noah Harari talks about The Matrix as if it were only about the story on the surface — our fear of being enslaved by machines — but subconsciously he may very well sense that the movie is actually more documentary than fiction.

Considering all of this, perhaps the concern around A.I. is a tad redundant

Isn’t the notion that this is not the base reality the implicit message from Buddha, Jesus, Plato, every religion, every ascended master, every mystic, every spiritual teacher and every spiritually oriented philosopher? And, by the way, from the non-religious philosophers of today who debate whether we are living in a simulation. And even from quantum physicists.

They all (except perhaps the quantum physicists) speak about the need to wake up. Why? Because stubbornly continuing to believe in what we are taught is real amounts to staying in a dream.

Now, how one defines and describes the matrix / filter / simulation / dream world varies greatly, of course. But the concept is basically the same: there is a more real reality than this one, and we are stuck in a persistent belief that this physical existence, and all the societal and cultural ideas we attach to it, is all there is because of oblivion / manipulation / a fall from grace with God.

Considering all of this, perhaps the concern around A.I. is a tad redundant, or at least overemphasized.

Illustration: Canva

On a more down-to-earth note, it is pretty clear that we have gradually increased our power to simulate new realities in the material world. The physical matrix we are immersed in has become ever more artificial. It is a plethora of simulations, as Tim Andersen points out in a thoughtful essay.

We have created

  • artificial rhythms and cycles (clock time)
  • artificial environment (houses, clothes, roads, parks, cities)
  • artificial beasts of burden (cars, planes, elevators)
  • artificial strength (spectacles, combine harvesters, power tools)
  • artificial human communication and interaction (magazines, books, TV, film, the internet)
  • artificial thinking (computers)

And not only that. Organizations, authorities, religious structures, nation states and money systems are persistent mental constructs that most of us believe actually exist as such. Some believe in them so fervently that they are prepared to die for them. How is that for a simulation that has managed to take over our minds? A.I. couldn’t do it more efficiently.

So, it is fair to say that the artificial intelligence we are debating today is being developed within an already massively artificial context.

In sum: The human essence — consciousness itself — can never be threatened, not by an A.I. that runs amok, and not by anything else. But of course, within the confines of the overarching illusion, i.e. the physical world, a number of problems could arise, as with every human invention.

I suppose it is possible that our inner evolution, the expansion of consciousness, could be hampered and/or delayed if we let human-like machines run more of the show. That would be a concern. But it doesn’t have to go that way.

It is ultimately up to us to decide whether we should let robots dominate us or not.

***

If you like this piece, please check out my other essays on Medium

I have a podcast and a YouTube channel called Mind the Shift

I also have a website

