Recently, I had an insightful conversation with friends about artificial intelligence (AI). My friend Pablo coined an enlightening term: "statistical reactor" (SR). This isn't just semantics; it's a vital shift in perspective, especially for those who hold superstitious ideas about how the technology really works.
The term "statistical reactor" rings true, especially when you consider ChatGPT's own admission that it is a better descriptor for the technology than the grandiose "artificial intelligence." Why? Because it underscores AI's true nature: a data-processing powerhouse, not a sentient being.
The deification of AI is the elephant in the room in AI safety debates.
Marc Andreessen hit the nail on the head, equating AI doomerism to a millenarian apocalypse cult. Given Marc’s interest in mimetic theory, I’m not surprised that he came to the same conclusion I did.
AI doomerism is nothing more than a coping mechanism for personal existential anxieties. When people panic, they seek a scapegoat to sacrifice, and eventually the scapegoat becomes a god. This is precisely what the doomers have done to AI.
This apocalypse cult pattern is blatant, yet astonishingly overlooked. The emperor, indeed, has no clothes, and few dare to challenge the doomer cult leaders. Ad hominem isn’t just a fallacy; sometimes, it's the shortest path to truth. Look at these AI doomerists: their personal lives often mirror those of past sex cult leaders. They prey on fears, cloaking their narratives in a guise of intellectualism.
It's ironic, really. These Dunning-Kruger atheists, who pride themselves on rejecting traditional religions, have inadvertently created their own. Their blind spots? Larger than those they criticize. Former gifted kids, cracked out on stimulants and anxiety meds, are hyperventilating over hallucinatory AI cataclysms. The streets of San Francisco — once the playground of tech optimists — now host hysterical evangelists of doom.
Back in 2018, I confronted Joscha Bach on this. I argued that the rationalist movement was fundamentally religious. His retort? Dismissing me for not being an 'epistemological genius' like Big Yud. Yet, with the collapse of the Effective Altruism movement, my point stands validated. These people, once revered, are now seen for what they are: clout-farmers and idol-worshippers.
Peter Thiel’s concept of definite optimism is correct. It echoes Søren Kierkegaard's notion of a 'leap of faith'. Just as Adam and Eve took a leap into sin, we must leap towards redemption, away from this AI-induced despair.
The AI doomers, in their narrative, have chosen the worst metaphysical reality – a world where everyone is doomed, and no one is redeemable. It's a stark contrast to even the religious doomers, who at least cling to a sliver of hope.
In closing, let's reject this narrative of AI as an all-knowing apocalyptic deity. AI is a statistical reactor, a tool shaped by our hands, not an oracle dictating our fate. Regardless of what smartypants like Joscha may think, God has always existed prior to the creation of the material universe. We can’t just create gods.
The real question isn't about AI's potential to bring about an apocalypse, but about how we, as a society, choose to use this potent tool. Yes, there are risks, just as with nuclear technology, but now that Pandora’s box has been opened, we must make the best of the situation rather than cower in fear. Let's remove the blindfold and see AI doomerism for what it really is: a new-age cult hiding in plain sight.
P.S.
It's been two years since my last email to you. In that time, I've cleared out most of my previous posts to bring a renewed focus to this newsletter. Going forward, you can expect a sharp focus on the pragmatic applications of mimetic theory. My aim is to distill insights from mimetic theory and various other philosophies into practical strategies for forging an unbreakable self-image and becoming the person you aspire to be. Expect to see my publications landing in your inbox bi-monthly, featuring a more accessible style than my past work. Also, stay updated with my latest thoughts by following my new Twitter handle: @return2mimetic.