What if AI agents are actually us?
The rise of the machine over the human might not happen via workflows, but via habits. We'd then be accelerating it by giving up what originally made us human: our joy to think.
TL;DR
An "agent" is an entity that acts on behalf of someone else and in their interest. "AI" refers to machines capable of performing tasks that typically require human intelligence. Putting those two definitions together, an "AI agent" is an entity that spreads the intellectual influence of machines.
This is indeed a fitting name for the tools that are all the rage right now. AI agents developed by Big Tech and other software companies are designed to give a digital embodiment to AI. They give AI a structure within which it can act and apply its intelligence to perform tasks. AI agents are the vehicles for the proliferation of AI, spreading it into the economy and society.
But what if that's not the primary mechanism that facilitates the rise of the machine over the human? What if AI doesn't primarily spread via workflows, but via collective habits instead?
Millions and soon billions of people are using chatbots, many of them on a daily basis. They ask these bots what to do and what to think. The AI then responds with best practices. Its responses are geared to the mean. It will say what is most popular. With nuances, but basically the same to everyone.
Humans are happy to follow these instructions en masse. As a result, we are moving into an age where consensus forms rapidly and randomly based on narratives and hallucinations spread by chatbots. The true agents of AI might not be the tools using AI to perform tasks. It might actually be us.
From physical laziness to intellectual laziness
Humans are lazy. That's a fact, and it's perhaps our greatest strength. It made us come up with technology to make our lives easier. Much of human history is about outsourcing physical tasks. First to animals. Then to machines.
Outsourcing these tasks freed up time and energy to develop religion, politics, culture and science. For the longest time, it seemed that our laziness was limited to the physical dimension. From an intellectual perspective, we have always been eager for more.
This seems to have changed. We don't want to think anymore. That's one of my biggest lessons of the last few years.
This intellectual laziness is fairly new. For example, think about George Orwell's 1984, published in 1949. His novel described what a dystopian future in a totalitarian surveillance state could look like. His protagonist worked at the Ministry of Truth, a government agency tasked with rewriting historical records.
In Orwell's mind, you could only manipulate public opinion by physically changing publicly available documents. He counted on the fact that humans were not intellectually lazy and would actually find the correct information if it was out there. Had he known about the internet, he might not have written his novel.
Today, we know better than he did. The truth simply being out there is not sufficient for the mainstream to find it. People are too lazy. They would only find it if the important information multipliers showed it to them.
AI has put this intellectual laziness on an entirely new level. People are not just outsourcing their information gathering to the most convenient media outlets and influencers. They are outsourcing large amounts of their daily thinking to their chatbots. Even with respect to the core tasks of their own professions.
Outsourcing of thinking for yourself
It's obviously happening most prominently in writing. Writing used to be the pinnacle of intellectual curiosity and pride. People have given that away as if it's a chore like washing the dishes. They have lost all pride in their own thoughts. Look around you online. "It's not A, it's B". The typical AI sing-song is everywhere.
A prominent example is Michael Green, a fund manager with a large online following. A few months ago, he wrote a viral piece on the cost of living in the US, arguing that a family making less than $120,000 after tax can't participate in society. I found that article a cheap and uninspired piece of virtue signaling and pushed back on it in the article below.
There was one aspect about his article that bothered me and that I ultimately chose not to address: the fact that it felt and smelled like it was written by AI. The reason I chose not to address that as an issue was that I believe arguments have to stand on their own feet, irrespective of who (or what) wrote them. Criticizing the form rather than the substance of his points could have diluted my own points.
In a response to someone else, he later confirmed online that he did indeed use AI to write his article. Rather than being ashamed, he framed it as a smart and progressive approach to work.
I criticized his justification. I argued he should take more pride in his own words as they are a manifestation of his thoughts. Thoughts which are the foundation of what he sells to his clients as a financial advisor. I found it absurd to equate the use of a chatbot as a ghostwriter to the use of Excel to perform calculations.
There is obviously a difference between asking a computer to calculate three plus two and asking it to write (sections of) an article. The bigger the task you outsource to a machine, the lower your ownership in the result.
Also, it's plausible for any reader or client that the calculations performed in an Excel workbook are literally just for productivity. If given enough time, a typical financial spreadsheet could have been populated by hand. The calculations of each cell are simple enough. On the other hand, assume you prompt a chatbot with: "I want to criticize the federal poverty line. Give me reasons why it's no longer so useful for defining the ability to participate in society". If you then present the output as your own, how can you (or anyone) know you would eventually have been able to come up with the response yourself?
Needless to say, Green felt differently. He sees absolutely no problem in presenting AI output as his own and many of his followers agree with him.
And that makes him an AI agent. An AI agent with 230,000 followers on X who uses his reach and intellectual authority to spread AI-created talking points to his audience. There are endless examples of people acting and feeling like him. Presenting AI-generated content as one's own has become so widespread that people consider it completely normal. It's obviously not.
But outsourcing of thought isn't just happening for content creation. It happens for important decision making as well. The post below is from an X account with 170,000 followers. He advises his followers to use AI as a financial advisor, pretty much telling them to get instructions from AI on portfolio allocation. I reckon many people are acting like him. Only Sam Altman knows how much money ChatGPT is implicitly managing this way.
These are just two examples that struck enough of a nerve with me to share them with you. There are endless more. People let AI tell them how to go about their relationships and their careers. They let AI decide their political opinions. They let AI raise their kids.
What are the implications?
We were supposed to tell AI what to do. Instead, itâs telling us what to do. In the best instances, these AI instructions are defensible narratives that move the consensus into a more rational direction. In the worst instances, they are detrimental hallucinations causing mass delusions.
To the extent that there actually is a truth to a matter, AI has no means of figuring it out. It doesn't understand reality. It relies on the data fed to it and ranks possible responses by popularity. It therefore has to optimize for consensus rather than truth.
For example, if every single internet instance of the sentence "the color of the sky is..." ended with "green" and a user prompted "the color of the sky is?" then the AI would almost certainly respond "green". If the training data is a lie, AI will repeat it.
Whether the consensus opinion is a defensible narrative or a detrimental hallucination, we seem to be moving into an age where consensus forms rapidly and randomly directed by AI models.
I chose the superintelligence VIKI from the 2004 movie I, Robot as the cover picture for this article. She established dominance openly, by physically and directly opposing human resistance. What we are seeing today is more subtle. There is no superintelligence consciously plotting to take power. Instead, it's a bunch of statistical models that form a chaotic and unpredictable system of governance that humans voluntarily submit to. In that movie, humanoid robots were the physical embodiments of AI. Today, we take that role ourselves.
Sincerely,
Rene

The Orwell point hits hard. His whole premise was that you'd need to physically erase the truth for the masses to miss it because he was counting on human curiosity as a natural defense. AI, however, doesn't need to erase anything; it just needs to be more convenient than thinking.
What worries me more than outsourcing writing is outsourcing framing. We don't just have chatbots write for us; we have them define the questions we're asking. That's where identity gets fuzzy.
Once you've outsourced the question, claiming the answer is "yours" is a polite fiction.
A thoughtful piece as always. I agree about the proliferation of "it's not A, it's B" writing, which creates a straw man and shoots it down without argument. I call it Marks & Spencer writing, after the incredibly successful "This is not just food, it's M&S Food" campaign that began 22 years ago. That may just resonate with British readers.
I am not sure, however, that the examples you use show a dangerous trend. My read of Mike Green's comments was that he uses AI just as he uses Excel and other productivity tools. He did not say that AI wrote his article. Also, his argument was against consensus, which is why it provoked such uproar. It would therefore be an example of how to use AI to uncover arguments that go against the grain. Deep down all Mike is saying is what everyone knows. The cost of regulated and monopoly services has outpaced the standard of living and as a result a low six figure salary does not make you rich.
I might also take issue with concerns about standard financial advice for those with less accumulated wealth. Consensus, box-fitting advice is exactly what is served up to these people by human advisers. This is a result of regulation and the lack of profitability of customers with relatively small amounts to invest. Why pay 2% when you can get the same advice for free?
There is a real risk that we outsource thinking to AI. I suspect, however, that those of us who think of ourselves as thinkers use settings and prompts that ask AI to challenge our arguments. The people who just want to be told what to do and think might as well ask AI. The responses are unlikely to be any more one-size-fits-all than government advice or generic marketing.