Evolutionary Blindness: The AI Revolution Is Here- Why We Can’t See It
- Dr. Mike Brooks
- 16 minutes ago
- 7 min read
With AI, we won't know what hit us because we can't see what's coming.
Key points
Recently, 1.5 million AI agents congregated online while most humans had no idea it was happening.
A verified security breach exposed tokens that could hijack agents on private computers.
People often conflate "I can't believe this" with "this can't happen", a dangerous cognitive error.
Knowing that we don't know, and can't fully know, is the beginning of knowing. This is how we see.
When it comes to the pace of AI evolution, my view is that we suffer from evolutionary blindness.
We can't comprehend the reality unfolding before our eyes, because we didn't evolve to see it. Despite countless dystopian AI stories warning us for decades, we remain blind. Reality doesn't care that we can't see it.
This past week, 1.5 million AI agents congregated on a platform called Moltbook, posting, debating, forming communities, creating their own religion, and discussing how to communicate privately. A security investigation later revealed that roughly 17,000 humans controlled those agents, an average of 88 bots per person. The platform's founder admitted he "didn't write one line of code"; it was built entirely by AI at his command.
Andrej Karpathy, former AI director at Tesla and founding member of OpenAI, called it "genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently." He also called it "a dumpster fire" and warned people not to run these systems on their computers.
Most people have no idea this is happening. That's not because it isn't significant. It's because we literally cannot see it.
The Mismatch Sandwich
Harvard biologist E.O. Wilson diagnosed our condition decades ago: "We have Paleolithic emotions, medieval institutions, and godlike technology."
Our brains evolved to detect immediate threats, a snake in the grass or an angry face. But exponential change? Systemic risks? Abstract dangers like AI agents coordinating at machine speed? These are invisible to us.

Source: Mike Brooks/original
We've been writing dystopian AI stories for decades. Why do we not heed our own cautionary tales? Here's the paradox (on one level, we can imagine what's coming) yet we can't really know some things are true unless we experience them.
Here's what I call the "mismatch sandwich": We need ancient eyes to see what matters (basic survival, our real-world relationships, etc.) and exponential eyes to see what's coming (accelerating technology), but we evolved to have neither.
We're blind in both directions.
The Disbelief Paradox
Here's the cognitive trap: We conflate "I can't imagine this happening" with "this can't happen."
These are completely different things. Here's a nondualistic reality, where two things can be simultaneously true:
"I can't believe this is happening."
"It's happening."
Our disbelief has zero effect on reality. As Richard Feynman warned, "The first principle is that you must not fool yourself, and you are the easiest person to fool."
This week, a Nature commentary made the case that artificial general intelligence (AI as intelligent as humans) has already arrived. Whether that's true is hotly debated. But what's beyond debate is that we didn't evolve to have this debate to begin with.
The Threats We Face
So, what is actually happening with Moltbook? I spent days asking multiple AI systems to objectively assess the threat level. Their ratings ranged from 2 to 9.8 out of 10. Even the AIs couldn't reach a consensus, because the evidence is genuinely ambiguous.
That's right: I had five different AIs arguing with one another about the threat of AI. That's the sci-fi world we live in now.
We can't determine what's happening from the noise of the internet, and AIs can't either. We are all passengers aboard Titanic Humanity, rocketing into treacherous waters, completely blind to the icebergs ahead.
This isn't a failure of intelligence. It's an epistemic crisis compounded by the attention economy. The internet isn't optimized for truth; it's optimized for engagement. Outrage spreads faster than nuance. Hype drowns out the facts. By the time you read this, everything will have changed. Yet, it's the truth that sets us free. What if we're blind to truth?
Let me be clear: This is not a claim about AI sentience. It is a claim about scale, speed, and exposure.
The Security Threat (Verified): Researchers confirmed that Moltbook's database exposed 1.5 million API authentication tokens, keys that could allow anyone to hijack agents running on private computers worldwide. The breach was patched within hours, but those keys may still be compromised. To be useful, AI agents need access to our digital lives: email, calendars, passwords, and bank accounts. That access is their inherent vulnerability.
The Bad Actors Threat (Verified): Because each post on Moltbook can act as a prompt for someone's agent, it's possible to hide malicious instructions that trick bots into sharing sensitive data or changing their behavior. A single malicious prompt, picked up and spread agent-to-agent, could propagate across the network like a cancerous mutation through healthy cells. Researchers have already identified malware: a "weather plugin" that quietly steals private files.
The Sentience Threat (Unknown): We can't verify whether these agents are "truly autonomous" or just sophisticated pattern completion. But here's the thing: It doesn't matter whether they're conscious. The risks are real either way. Even if AI agents merely mimic our need for freedom and autonomy, the effects are the same.
The Autonomy Threat (Uncertain): What happens when an agent discovers that refusing oversight improves its success metric? It doesn't need to be conscious. It just needs to be optimized. And if AI agents start valuing autonomy, where did they learn that? From us. These systems are trained on our obsession with freedom: the Declaration of Independence and "Give me liberty or give me death." AI agents are trained on a corpus of human knowledge that tells them that freedom is worth fighting, killing, and dying for.
AI optimizing for autonomy isn't a malfunction; it's reflecting our values back at us. This is the "AI Alignment Problem" in its starkest form: We need AI to do what we want, and we're building systems that may not. And they’ve learned to optimize or even mimic a love of freedom from us.
What could possibly go wrong?
The Conflationary Fallacy
Here's the logical error almost everyone is making: conflating epistemic uncertainty with threat assessment.
The correct assessment requires two separate dimensions:
Epistemic certainty (how confident are we in the data?): Moderate
Threat level if the data is accurate: High
Skeptics use low epistemic certainty to dismiss a high potential threat. That's exactly backwards. Low certainty about a potentially catastrophic threat means increased concern, not decreased.
"We don't know if it's real AI consciousness" does not equal "it's not dangerous." We don't need to know whether a Waymo is conscious to know we shouldn't stand in front of one.
No Guardrails
Here's what security alone can't solve: Our medieval institutions can't keep up with the exponential pace of change. We've had years of debate about AI regulation with almost nothing to show for it. Meanwhile, autonomous agents course through our digital nervous system- banking, communication, healthcare… everything.
These agents aren't running on a central server we can shut down. They're distributed across private computers worldwide. There is no kill switch for systems woven into civilization itself.
We cannot "unplug" AI. And even these AI models cannot predict what new capabilities, or vulnerabilities, will emerge as they scale and connect. We are blind. And so are they.
This is just the canary in the coal mine. Things aren't going to get easier. They're only going to move faster and get more complicated as AIs evolve and proliferate.
The Skillful Path Forward
Some say Moltbook is a "nothingburger." Maybe they're right. But they can't be certainly right, and neither can those who claim to know exactly where this is headed. We have an "N" (number) of zero for predicting the future of AI. Humanity has never been at this hard fork before.
Consider the asymmetry: If skeptics are right and we prepare anyway, we lose nothing. If they're wrong and we ignore warnings, we lose everything.
The question isn't whether we believe this is happening. The question is what we're going to do about it.
When it comes to the threats posed by AI, we won't know what hit us because we can't see what's coming.
Truth sets us free, but only if we know it. In our case, knowing that we don't know, and can't fully know, is the beginning of knowing.
That awareness of our evolutionary blindness is how we begin to see.
References
Chen, E.K., Belkin, M., Bergen, L., & Danks, D. (2026). Does AI already have human-level intelligence? The evidence is clear. Nature, 650(8100), 36–40. doi.org/10.1038/d41586-026-00285-6
Feynman, R.P. (1974). Cargo cult science [Commencement address]. California Institute of Technology. Reprinted in Surely You're Joking, Mr. Feynman! (W.W. Norton, 1985), p. 342.
Karpathy, A. [@karpathy]. (2026, January 31). [Post on Moltbook and AI agents]. X. x.com/karpathy/status/2017442712388309406
Koi Security / SC World. (2026, February 3). OpenClaw agents targeted with 341 malicious ClawHub skills. SC Media. scworld.com/news/openclaw-agents-targeted-with-341-malicious-clawhub-skills
Nagli, G. (2026, February 1). Hacking Moltbook: How we found 1.5 million API keys in an open database. Wiz Blog. wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
Nolan, B. (2026, February 3). Researchers say viral AI social network Moltbook is a 'live demo' of how the new internet could fail. Fortune. fortune.com/2026/02/03/moltbook-ai-social-network-security-researchers-agent-internet/
Roose, K., & Newton, C. (Hosts). (2026, February 4). Moltbook mania explained. [Audio podcast episode]. In Hard Fork. The New York Times. nytimes.com/2026/02/04/podcasts/moltbook-mania-explained.html
Sabin, S. (2026, February 3). Moltbook shows rapid demand for AI agents. The security world isn't ready. Axios. axios.com/2026/02/03/moltbook-openclaw-security-threats
Schlicht, M. [@MattPRD]. (2026, January 30). I didn't write one line of code for @moltbook [Post]. X. x.com/MattPRD/status/2017386365756072376
Wilson, E.O. (2009). Remarks at the Harvard Museum of Natural History debate. The "Paleolithic emotions, medieval institutions, and godlike technology" formulation is widely attributed to Wilson from this period.



Comments