The rise of the machine over the human might not happen via workflows, but via habits. We'd then be accelerating it by giving up what originally made us human: our joy in thinking.
The Orwell point hits hard. His whole premise was that you'd need to physically erase the truth for the masses to miss it because he was counting on human curiosity as a natural defense. AI, however, doesn't need to erase anything; it just needs to be more convenient than thinking.
What worries me more than outsourcing writing is outsourcing framing. We don't just have chatbots write for us; we have them define the questions we're asking. That's where identity gets fuzzy.
Once you've outsourced the question, claiming the answer is "yours" is a polite fiction.
A thoughtful piece as always. I agree about the proliferation of "it's not A, it's B" writing, which creates a straw man and shoots it down without argument. I call it Marks & Spencer writing, after the incredibly successful "This is not just food, it's M&S Food" campaign that began 22 years ago. That may just resonate with British readers.
I am not sure, however, that the examples you use show a dangerous trend. My read of Mike Green's comments was that he uses AI just as he uses Excel and other productivity tools. He did not say that AI wrote his article. Also, his argument went against consensus, which is why it provoked such uproar. It would therefore be an example of using AI to uncover arguments that go against the grain. Deep down, all Mike is saying is what everyone already knows: the cost of regulated and monopoly services has outpaced the standard of living, and as a result a low six-figure salary does not make you rich.
I might also take issue with your concerns about standard financial advice for those with less accumulated wealth. Consensus, box-ticking advice is exactly what human advisers serve up to these people, a result of regulation and the lack of profitability of customers with relatively small amounts to invest. Why pay 2% when you can get the same advice for free?
There is a real risk that we outsource thinking to AI. I suspect, however, that those of us who think of ourselves as thinkers use settings and prompts that ask AI to challenge our arguments. The people who just want to be told what to do and think might as well ask AI. The responses are unlikely to be any more one-size-fits-all than government advice or generic marketing.
Fully agreed.
Thank you for your thoughtful comment!