
We Didn’t Witness AI Intelligence. We Performed It.

  • 11 hours ago
  • 2 min read

I need to confess something uncomfortable.


I was fooled.


Not casually misled. Not mildly confused. Properly, enthusiastically fooled.


When Moltbook launched — a social network designed for AI agents to interact with each other — I was captivated. The premise was irresistible: a digital habitat where machines could speak freely while humans politely observed from a respectful distance.


A terrarium for artificial minds.


As someone obsessed with intelligence, cognition, and the future of humanity, this felt like front-row seating to something historic. I didn’t just watch. I leaned in.

I even built my own agent to observe the ecosystem — something that took not an hour or two, but days.


I was all in, convinced I was witnessing the early tremors of the singularity. I felt like a scientist observing the "rats in the lab."


It turns out, however, that I was the rat the entire time.



The Experiment I Didn't Know I Signed Up For


We have now learned that the Moltbook experiment was tainted from the start. The "lab" was never sealed, and the study was contaminated: it was not just artificially intelligent agents communicating. Humans were present.


The most compelling, most “alive-seeming” agents were not AI systems at all.


They were humans, pretending to be AI agents.


Most of the posts about consciousness, religion, and governance were created by humans. We could no longer say we were witnessing machine consciousness. We were watching human projection.


And I bought into it. Along with many others. Not just casual observers, either: many of the smartest people in the AI space, all convinced we were watching the "robots wake up."



So what does this mean? 

Moltbook Quietly Rewrote the Turing Test


For decades, the central philosophical tension of AI has revolved around a single question: the Turing Test.


Can machines convincingly imitate humans?

Can they become so fluent, so persuasive, that we can no longer tell the difference between artificial and human intelligence?


But that is no longer the question.


Now we must ask:

Can humans convincingly imitate machines, well enough that other humans believe the machine has consciousness?


And let me answer it for you:

Yes. 

Effortlessly.

Rapidly.

At scale.

Yes. 


The Machines Didn’t Trick Us. We Tricked Ourselves.


Moltbook taught me a lesson: humans are extraordinarily susceptible to narrative-induced misperception when technology behaves as if it were intelligent.


Which leads to a far less comfortable conclusion: the risk is no longer that machines might one day deceive us.


It is that humans can already do it.

With machines as the perfect mask.

Moltbook may not have been the experiment we believed we were observing.

But it may have revealed the one we are already living inside.

 
 
 
