Disclaimer: The information contained in this article is not and should not be construed as investment advice. This is my investing journey and I simply share what I do and why I do it, for educational and entertainment purposes.
This article is entirely free to read.
TLDR Summary
Artificial intelligence is mostly pattern-matching, and pattern-matching is an important method by which intelligence acquires and applies knowledge and skills. However, general intelligence is more than that. It includes intention and consciousness, which are required to develop something brand-new, something that is not based on training data.
Had Gutenberg had an AI at his disposal, it would have helped him optimize the task of manually copying books. A human was necessary to come up with the idea of casting letters, arranging them into pages and reprinting them countless times. AI is great at accelerating the development of something that has an existing framework. It’s great at going from one to two. But it can’t go from zero to one. It wouldn’t have been able to go from caveman to astronaut the way humans have.
While it is fascinating to debate the nature of intelligence, it’s also a fruitless effort. What matters more is the realization that today’s AI is a powerful tool that dramatically improves the distribution of knowledge, which in turn accelerates the acquisition of new knowledge. It puts you on speaker with exactly the humans of the past and present who have the answer to your problem. That boosts productivity and ultimately drives economic growth. This will be true irrespective of whether today’s AI can become AGI or not. It’s not about how far and how fast AI itself progresses. It’s about what new products will be enabled through this technology.
At the individual level, we don’t need to fear this development as long as we stay original in our ideas and work results. AI will only commoditize what already exists.
Deep Learning won’t get us to AGI…
Artificial general intelligence (AGI) is commonly defined as an artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks.
What we refer to as AI today consists primarily of computer models that generate outputs based on statistics, i.e. the analysis and interpretation of data, in most cases vast amounts of it. The model parses this data to find patterns from which it then draws conclusions.
Humans do that, too. Many people even argue that pattern-matching is in fact virtually all there is to human intelligence. We observe the world and draw conclusions that drive our decisions.
However, human intelligence is more than just pattern-matching. Mind Prison wrote a fantastic piece on that recently that I highly recommend reading.
They assert that we are so mesmerized by the performance of our new AI tools that we assign more substance to them than there really is.
“Any sufficiently advanced technology is indistinguishable from magic”
— Arthur C. Clarke
“Any sufficiently advanced pattern-matching is indistinguishable from intelligence”
— Mind Prison
They argue that human intelligence differs from today’s artificial intelligence because of its self-awareness and conceptual thinking. We can observe and perceive reality even when there is no prior data. AI can merely pretend or simulate that understanding. It doesn’t truly have it. That’s where its hallucinations come from.
The way I see it, the main difference is that we live consciously and intentionally, which is the prerequisite for developing things that are totally new. It enabled us to invent the wheel, the printing press, the steam engine. It enabled us to conquer the world.
Let’s imagine the following scenario to illustrate what I mean: Let’s say all humans were replaced by a species of humanoid robots 1,000 years ago, and that these robots were equipped with an AI of today’s standard, given the mission to maximize the species’ reach and the number of its members. Let’s also assume that this AI possessed all human knowledge of that time as the basis for this mission. And finally, since there was no electricity back then, let’s assume that these robots were powered by the same nutrition humans were consuming at the time.
Would this AI have developed airplanes and spaceships to accelerate its expansion? Or would it have continued to use and improve ships?
Would it have developed the printing press to accelerate the storage and distribution of knowledge? Or would there still be legions of robots copying books by hand?
I think the case is pretty strong that there would be no airplanes and spaceships today, because there was no data on the physiology of bird wings at that time. Would one of the robots have set out by itself to study birds, get the idea and put it into action? I doubt it. Same with the printing press. Would a robot have tinkered with different forms of knowledge distribution even if it didn’t necessarily lead anywhere? I doubt that, too. If the creator of the AI didn’t plant the seed for a certain type of innovation, it wouldn’t have developed it.
AI has undoubtedly accelerated many activities, for example software development, drug development and digital entertainment. But these are innovations within existing frameworks. Machine-written code is assembled from code fragments that humans have written elsewhere before. AI drug development works with mechanisms of action that humans have previously defined in academia. And AI art uses human art as a reference point when prompted. Humans do that, too. But occasionally, new forms of expression occur. An AI left to itself would simply cycle through the same forms of art over and over again.
So far, there is not a single instance where AI has developed something brand-new. And it seems to me that its architecture prevents that from ever happening.
…but in the end, it doesn't even matter.
I can imagine that this sounds unnecessarily skeptical to you. Perhaps you even think I am just coping with humanity’s inevitable fall into insignificance. Perhaps I am. But being an AGI bear doesn’t require being an AI bear. I simply don’t think AGI is needed to make the current innovation a success story. Debating the nature of intelligence is fascinating, but in a way also pointless. To explain that, I want to put this innovation into a broader context.
There are more than eight billion people on this planet. Every day they live their lives. They face challenges and overcome them, often with ingenuity and creativity. They come up with new solutions to existing problems which cause new problems to which they respond with even newer solutions.
This process is most powerful when they collaborate and leverage the synergies from the combination of their different skills and talents. This collaboration requires an effective mechanism to document, store and distribute the knowledge they acquire along the way.
The invention of the internet was a milestone in that process. Most of the collective knowledge of our species is now stored on millions of connected devices around the world. Search engines (most importantly Google’s) have organized this knowledge and made it readily available. They scanned the entire internet for websites and put them into a massive digital library, ranked by popularity and keywords.
It’s near impossible to know how much search engines have fueled economic growth over the last three decades, but I like to think it’s quite a sizeable chunk of it. Think about whatever productive activity you have worked on over the last thirty years and whether it would have been possible without a search engine like Google. Without them, the internet may never have taken off. What use is a tool that contains all human knowledge if it’s near impossible to find what you need?
I view much of today’s AI as an improvement on the function previously served by search engines. It’s a huge leap forward in the distribution of knowledge. You no longer have to search for the information you need on off-the-shelf websites. Instead, you get it tailor-made and bite-sized. A chatbot like ChatGPT allows you to communicate with billions of humans of the present and the past. And it puts you on speaker with exactly the person you need to move forward with your project. Think of the lines written by ChatGPT as an echo of the words uttered by all humans who have previously worked on the topic you are currently studying.
Framing AI as a novel knowledge-distribution tool comes with several noteworthy conclusions:
At a macro level, this is obviously a monstrous productivity gain that provides fertile ground for economic growth. For me personally, it’s important not to lose track of that when I go down my rabbit hole of fiscal flows, interest rates and trade dynamics.
At a micro level, AI is a tool for humans, not their replacement. For every one of us, this means that we don’t have to fear this technology as long as we are original in our ideas and work results. AI will only commoditize what already exists.
AI’s most powerful impact will be in those areas where it boosts human productivity the most. For example, any innovation that has so far been limited by programming capacity now has the potential for accelerated adoption. Full self-driving and humanoid robots come to mind.
Sincerely,
Rene
I read another Substack from a prominent expert on the impact of AI. He shared an X post from a PhD researcher who was blown away by a conversation he had with an AI that allegedly came up with a novel way to advance a cancer treatment. To check this claim I simply googled the parts of the approach that made up the “novel idea”, and sure enough the whole idea had been published in a paper months prior. So, I came to the same conclusion as you: I don’t see any evidence that it can do anything novel. But its ability to, as in this example, cut through all the recent research relevant to what someone is working on and catch them up without their having to sift through it all themselves is a pretty huge productivity boost.
To quantify Kev's point, run a modified Turing test where a large sample of subjects each have two conversations with two other subjects. Based on the conversation, they grade how intelligent they consider each subject to be. Randomly introduce the latest AI as some of the subjects in the study. How do you think they'd stack up? The more specialised the questions, the better the model's answers are versus the average person's. Current models are better problem solvers than the vast majority of humans and ridiculously more knowledgeable. Mind Prison saying there has been no progress towards general intelligence is either shock-factor clickbait or naive. Probably the former.
WRT their developing anything new, they do that extremely well. They're better storytellers than I am, better artists than I am, and generally more creative than I am. They absolutely have novel thought. The point you're making is, I think, more one of culturally significant invention. And for that I agree with you that current AI doesn't do that yet, but I disagree that they're not moving towards it, and I disagree that it's due to a lack of intelligence.
Things like the printing press and airplanes were not sudden inspirations that manifested in a revolution the next day. They are the result of a spark of thought followed by a long period of evolution. Prototyping and experimentation develop an inventor's understanding, bringing them closer and closer to something eventually viable. LLMs can't do this, but let's explore why.
First, stating the obvious, they're language models with no body, so they lack a means of prototyping. Thought experiments only get you so far in the real world. Even as they evolve from language models into Human Intelligence Models (HIM), they need a physical counterpart to play the role you're expecting of them, not just intelligence.
Second, and probably the real crux of the point you're making, they currently lack a viable long-term feedback loop. Context windows are getting bigger and bigger, but the base foundational models are not incrementally improved based on the output they themselves produce. They need an expensive and time-consuming training process to fundamentally improve. To match human intelligence they need to be able to learn not just temporarily, but permanently. This is, IMO, the biggest gap between where we are now and where we need to be for AI's creative thought to become culturally significant. However, just as it took the Wright brothers a while to figure out flight, it will take us a while to figure this out. That's how significant progress happens.
-j